url
stringlengths 62
66
| repository_url
stringclasses 1
value | labels_url
stringlengths 76
80
| comments_url
stringlengths 71
75
| events_url
stringlengths 69
73
| html_url
stringlengths 50
56
| id
int64 377M
2.15B
| node_id
stringlengths 18
32
| number
int64 1
29.2k
| title
stringlengths 1
487
| user
dict | labels
list | state
stringclasses 2
values | locked
bool 2
classes | assignee
dict | assignees
list | comments
sequence | created_at
int64 1.54k
1.71k
| updated_at
int64 1.54k
1.71k
| closed_at
int64 1.54k
1.71k
⌀ | author_association
stringclasses 4
values | active_lock_reason
stringclasses 2
values | body
stringlengths 0
234k
⌀ | reactions
dict | timeline_url
stringlengths 71
75
| state_reason
stringclasses 3
values | draft
bool 2
classes | pull_request
dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/9630 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9630/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9630/comments | https://api.github.com/repos/huggingface/transformers/issues/9630/events | https://github.com/huggingface/transformers/issues/9630 | 787,331,313 | MDU6SXNzdWU3ODczMzEzMTM= | 9,630 | key error when use trainer to fine_tuning a dataset | {
"login": "XiaoYang66",
"id": 43234824,
"node_id": "MDQ6VXNlcjQzMjM0ODI0",
"avatar_url": "https://avatars.githubusercontent.com/u/43234824?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XiaoYang66",
"html_url": "https://github.com/XiaoYang66",
"followers_url": "https://api.github.com/users/XiaoYang66/followers",
"following_url": "https://api.github.com/users/XiaoYang66/following{/other_user}",
"gists_url": "https://api.github.com/users/XiaoYang66/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XiaoYang66/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XiaoYang66/subscriptions",
"organizations_url": "https://api.github.com/users/XiaoYang66/orgs",
"repos_url": "https://api.github.com/users/XiaoYang66/repos",
"events_url": "https://api.github.com/users/XiaoYang66/events{/privacy}",
"received_events_url": "https://api.github.com/users/XiaoYang66/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Duplicate of #9636"
] | 1,610 | 1,610 | 1,610 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.1
- Platform: Linux-3.10.0-693.5.2.el7.x86_64-x86_64-with-centos-7.4.1708-Core
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
ray/raytune: @richardliaw @amogkam
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@sgugger
## Information
Model I am using (Bert, XLNet ...):bert-base-uncased
The problem arises when using:
* [ ] the official example scripts: (give details below)
i am fine-tuning a text_claasifiction on dbpedia_14.and i followed this colab https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/text_classification.ipynb#scrollTo=TlqNaB8jIrJW
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
datset:dbpedia_14
## To reproduce
Steps to reproduce the behavior:
error
`File "train.py", line 69, in <module>
trainer.train()
File "/home/pliu3/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/transformers/trainer.py", line 784, in train
for step, inputs in enumerate(epoch_iterator):
File "/home/pliu3/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
data = self._next_data()
File "/home/pliu3/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/pliu3/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/pliu3/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
KeyError: 2`
code
```python
dataset_name = 'sem_eval_2014_task_1'
num_labels_size = 3
batch_size = 4
model_checkpoint = 'bert-base-uncased'
number_train_epoch = 5
def tokenize(batch):
return tokenizer(batch['premise'], batch['hypothesis'], truncation=True, )
def compute_metrics(pred):
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='micro')
acc = accuracy_score(labels, preds)
return {
'accuracy': acc,
'f1': f1,
'precision': precision,
'recall': recall
}
model = BertForSequenceClassification.from_pretrained(model_checkpoint, num_labels=num_labels_size)
tokenizer = BertTokenizerFast.from_pretrained(model_checkpoint, use_fast=True)
train_dataset = load_dataset(dataset_name, split='train')
test_dataset = load_dataset(dataset_name, split='test')
train_encoded_dataset = train_dataset.map(tokenize, batched=True)
test_encoded_dataset = test_dataset.map(tokenize, batched=True)
args = TrainingArguments(
output_dir='./results',
evaluation_strategy="epoch",
learning_rate=2e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=number_train_epoch,
weight_decay=0.01,
do_predict=True
)
trainer = Trainer(
model=model,
args=args,
compute_metrics=compute_metrics,
train_dataset=train_encoded_dataset,
eval_dataset=test_encoded_dataset,
tokenizer=tokenizer
)
trainer.train()
trainer.evaluate()
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9630/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9629 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9629/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9629/comments | https://api.github.com/repos/huggingface/transformers/issues/9629/events | https://github.com/huggingface/transformers/issues/9629 | 787,301,182 | MDU6SXNzdWU3ODczMDExODI= | 9,629 | [Question] How to use threads for huggingface transformers | {
"login": "Vildnex",
"id": 10352059,
"node_id": "MDQ6VXNlcjEwMzUyMDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10352059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vildnex",
"html_url": "https://github.com/Vildnex",
"followers_url": "https://api.github.com/users/Vildnex/followers",
"following_url": "https://api.github.com/users/Vildnex/following{/other_user}",
"gists_url": "https://api.github.com/users/Vildnex/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vildnex/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vildnex/subscriptions",
"organizations_url": "https://api.github.com/users/Vildnex/orgs",
"repos_url": "https://api.github.com/users/Vildnex/repos",
"events_url": "https://api.github.com/users/Vildnex/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vildnex/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, thank you for opening an issue! Could you put the full stack-trace?\r\n\r\nI guess this comes from the tokenizer, rather than the model as we've already seen this error in [tokenizers](https://github.com/huggingface/tokenizers/issues/537).\r\n\r\nAs a means of debugging, can you let me know what happens if you change this line:\r\n```py\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\n```\r\nto\r\n```py\r\ntokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)\r\n```",
"When use_fast is set to false I no longer see the borrow exception but randomly experience this one instead. This appears to be an issue in the qa pipeline code rather than tokenizer code though. I can test the tokenizer in isolation if that'd be helpful.\r\n\r\n```\r\n/usr/local/lib/python3.6/dist-packages/transformers/pipelines/question_answering.py in <listcomp>(.0)\r\n 360 ),\r\n 361 }\r\n--> 362 for s, e, score in zip(starts, ends, scores)\r\n 363 ]\r\n 364 else:\r\n\r\nKeyError: 132\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,610 | 1,619 | 1,619 | NONE | null | I'm trying to run a hugging face model, mode exactly **"cardiffnlp/twitter-roberta-base-sentiment"** on threads. But at the same time, I want just one single instance of it because it's really costly in terms of time.
In other words, I have multiple CSV files (several thousand) and each of them has around 20k-30k lines and I want that each line from all of them to be executed by the huggingface model, as you probably can imagine already this is the reason why I don't want to instantiate a model for each thread (where each thread would be used just to read one line and write it in the database).
The problem with my approach is that when I'm running the code is going to give me an error from huggingface model.
> RuntimeError: Already borrowed
Could any of you help me to understand how cand I fix it?
***Hugging face model:***
class EmotionDetection(object):
def __init__(self, model_name="cardiffnlp/twitter-roberta-base-sentiment"):
self.model_name = model_name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
self.classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True,
task="sentiment-analysis", device=0)
def get_emotion_by_label(self, label: str):
if label == "LABEL_0":
return "negative"
elif label == "LABEL_1":
return "neutral"
elif label == "LABEL_2":
return "positive"
else:
print("SOMETHING IS WRONG")
return ""
def get_emotion(self, phrase):
results = self.classifier(phrase)
res = dict()
for result in results:
for emotion in result:
res.update({self.get_emotion_by_label(emotion['label']): emotion['score']})
return res
***My code for generating database:***
class GenerateDbThread(object):
def __init__(self, text: str, created_at: datetime.datetime, get_emotion_function, cursor, table_name):
self.table_name = table_name
self.text = text
self.created_at = created_at
emotions = get_emotion_function(self.text)
self.pos = emotions['positive']
self.neg = emotions['negative']
self.neu = emotions['neutral']
self.cursor = cursor
def execute(self):
query = f"INSERT INTO {self.table_name}(date, positive, negative, neutral, tweet) " \
f"VALUES (datetime('{str(self.created_at)}'),{self.pos},{self.neg},{self.neu}, '{self.text}')"
self.cursor.execute(query)
self.cursor.commit()
def get_all_data_files_path(data_dir: str):
return [f for f in os.listdir(data_dir) if os.path.isfile(os.path.join(data_dir, f))]
def run(file: str, table_name: str):
df = pd.read_csv(os.path.join('data', file), delimiter=',')
for index, row in df.iterrows():
text = row['tweet']
language = row['language']
split_data = row['created_at'].split(" ")
GTB_Time = f"{split_data[2]} {split_data[3]} {split_data[4]}"
created_at = datetime.datetime.strptime(row['created_at'], f"%Y-%m-%d %H:%M:%S {GTB_Time}")
if language == "en":
GenerateDbThread(text, created_at, emotion_detector.get_emotion, cursor, table_name)
def init_db(db_name, table_name):
conn = sqlite3.connect(db_name)
cursor = conn.cursor()
cursor.execute(f"""
CREATE TABLE IF NOT EXISTS {table_name} (
uid INTEGER PRIMARY KEY AUTOINCREMENT,
date DATETIME NOT NULL,
positive REAL NOT NULL,
negative REAL NOT NULL,
neutral REAL NOT NULL,
text TEXT NOT NULL
)""")
cursor.execute(f"CREATE INDEX IF NOT EXISTS ix_tweets_index ON {table_name}(uid)")
cursor.close()
ex = ThreadPoolExecutor(max_workers=10)
files = get_all_data_files_path('data')
init_db("DB_NAME.db", "TABLE_NAME")
emotion_detector = EmotionDetection()
conn = sqlite3.connect("DB_NAME.db")
cursor = conn.cursor()
pbar = tqdm(total=len(files))
futures = [ex.submit(run, file, "TABLE_NAME") for file in files]
for future in futures:
res = future.result()
pbar.update(1)
pbar.close()
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9629/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9629/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9628 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9628/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9628/comments | https://api.github.com/repos/huggingface/transformers/issues/9628/events | https://github.com/huggingface/transformers/issues/9628 | 787,299,592 | MDU6SXNzdWU3ODcyOTk1OTI= | 9,628 | Issue with TrainingArguments docs. | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Do you want to open a PR to fix it? Thanks!"
] | 1,610 | 1,611 | 1,611 | CONTRIBUTOR | null | Hi Team,
This is a minor issue but on this [link](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments), the default optimizer mentioned is Adam in the docs. However, the `Trainer` uses AdamW by default.
This is slightly misleading.
Thanks,
Gunjan
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9628/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9627 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9627/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9627/comments | https://api.github.com/repos/huggingface/transformers/issues/9627/events | https://github.com/huggingface/transformers/issues/9627 | 787,257,598 | MDU6SXNzdWU3ODcyNTc1OTg= | 9,627 | Passing in custom BartForConditionalGeneration model as generator to RagSequenceForGeneration | {
"login": "nitarakad",
"id": 18504534,
"node_id": "MDQ6VXNlcjE4NTA0NTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/18504534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nitarakad",
"html_url": "https://github.com/nitarakad",
"followers_url": "https://api.github.com/users/nitarakad/followers",
"following_url": "https://api.github.com/users/nitarakad/following{/other_user}",
"gists_url": "https://api.github.com/users/nitarakad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nitarakad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nitarakad/subscriptions",
"organizations_url": "https://api.github.com/users/nitarakad/orgs",
"repos_url": "https://api.github.com/users/nitarakad/repos",
"events_url": "https://api.github.com/users/nitarakad/events{/privacy}",
"received_events_url": "https://api.github.com/users/nitarakad/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Solved by using `RagConfig` and initializing it with a `DPRQuestionEncoder` and the custom `BartForConditionalGeneration` generator configs. Passed the `RagConfig`, question encoder, generator, and retriever, to `RagModel` to initialize the model.",
"Could you provide a complete sample code about how to do it? I'm stuck at how to do the config initialization for a DPR + customized BartForCondGen. Thanks!"
] | 1,610 | 1,630 | 1,610 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (False)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten, @lhoestq
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
ray/raytune: @richardliaw @amogkam
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using:
`RagSequenceForGeneration` using pretrained `facebook/rag-sequence-nq` with a custom generator initialized with `BartForConditionalGeneration`
The problem arises when using:
In the docs (https://huggingface.co/transformers/model_doc/rag.html) it is stated that a `generator` can be used when initializing `RagSequenceForGeneration`. When using a custom pretrained BART model as the `generator`, I get the error:
`ModuleAttributeError: 'BartForConditionalGeneration' object has no attribute 'to_dict'`
To troubleshoot, I initialized my generator using pretrained `facebook/bart-basee` and RAG model the following way:
```
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration, BartForConditionalGeneration
model_name = 'facebook/bart-base'
generator = BartForConditionalGeneration.from_pretrained(model_name)
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever, generator=generator)
```
However, I get the same `ModuleAttributeError`.
The tasks I am working on is:
I want to initialize a `RagSequenceForGeneration` with a custom generator.
## To reproduce
Steps to reproduce the behavior:
1. Run the code block above and the error is outputted: `ModuleAttributeError: 'BartForConditionalGeneration' object has no attribute 'to_dict'`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Expected model to initialize with a custom generator, as described in the docs (https://huggingface.co/transformers/model_doc/rag.html).
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9627/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9626 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9626/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9626/comments | https://api.github.com/repos/huggingface/transformers/issues/9626/events | https://github.com/huggingface/transformers/pull/9626 | 787,195,125 | MDExOlB1bGxSZXF1ZXN0NTU1OTQ4MjUw | 9,626 | Fix: torch.utils.checkpoint.checkpoint attribute error. | {
"login": "devrimcavusoglu",
"id": 46989091,
"node_id": "MDQ6VXNlcjQ2OTg5MDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/46989091?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/devrimcavusoglu",
"html_url": "https://github.com/devrimcavusoglu",
"followers_url": "https://api.github.com/users/devrimcavusoglu/followers",
"following_url": "https://api.github.com/users/devrimcavusoglu/following{/other_user}",
"gists_url": "https://api.github.com/users/devrimcavusoglu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/devrimcavusoglu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/devrimcavusoglu/subscriptions",
"organizations_url": "https://api.github.com/users/devrimcavusoglu/orgs",
"repos_url": "https://api.github.com/users/devrimcavusoglu/repos",
"events_url": "https://api.github.com/users/devrimcavusoglu/events{/privacy}",
"received_events_url": "https://api.github.com/users/devrimcavusoglu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,610 | 1,610 | 1,610 | CONTRIBUTOR | null | # What does this PR do?
Fixes #9617 along with the other `modeling_<modelname>.py` as well where the import statements are missing.
## Who can review?
@LysandreJik, @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9626/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9626/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9626",
"html_url": "https://github.com/huggingface/transformers/pull/9626",
"diff_url": "https://github.com/huggingface/transformers/pull/9626.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9626.patch",
"merged_at": 1610962419000
} |
https://api.github.com/repos/huggingface/transformers/issues/9625 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9625/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9625/comments | https://api.github.com/repos/huggingface/transformers/issues/9625/events | https://github.com/huggingface/transformers/issues/9625 | 787,157,526 | MDU6SXNzdWU3ODcxNTc1MjY= | 9,625 | Weighted Loss in BertForTokenClassification | {
"login": "krishanudb",
"id": 11831343,
"node_id": "MDQ6VXNlcjExODMxMzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/11831343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krishanudb",
"html_url": "https://github.com/krishanudb",
"followers_url": "https://api.github.com/users/krishanudb/followers",
"following_url": "https://api.github.com/users/krishanudb/following{/other_user}",
"gists_url": "https://api.github.com/users/krishanudb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krishanudb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krishanudb/subscriptions",
"organizations_url": "https://api.github.com/users/krishanudb/orgs",
"repos_url": "https://api.github.com/users/krishanudb/repos",
"events_url": "https://api.github.com/users/krishanudb/events{/privacy}",
"received_events_url": "https://api.github.com/users/krishanudb/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | [
"In PyTorch, [`nn.CrossEntropyLoss`](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html) has an optional `weight` parameter which you can specify. This should be a 1D Tensor assigning a weight to each of the classes.\r\n\r\nSo if you want `BertForTokenClassification` with a weighted cross entropy loss, you can simply replace [this line](https://github.com/huggingface/transformers/blob/c60e0e1ee45f4bf1017736b146c51729f120bb83/src/transformers/models/bert/modeling_bert.py#L1685) by a weighted loss. For example, you can define it as follows (I just copied the relevant code from `modeling_bert.py` and slightly adapted the cross entropy loss):\r\n\r\n```\r\nclass BertForTokenClassification(BertPreTrainedModel):\r\n\r\n _keys_to_ignore_on_load_unexpected = [r\"pooler\"]\r\n\r\n def __init__(self, config):\r\n super().__init__(config)\r\n self.num_labels = config.num_labels\r\n\r\n self.bert = BertModel(config, add_pooling_layer=False)\r\n self.dropout = nn.Dropout(config.hidden_dropout_prob)\r\n self.classifier = nn.Linear(config.hidden_size, config.num_labels)\r\n\r\n self.init_weights()\r\n\r\n @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format(\"batch_size, sequence_length\"))\r\n @add_code_sample_docstrings(\r\n tokenizer_class=_TOKENIZER_FOR_DOC,\r\n checkpoint=\"bert-base-uncased\",\r\n output_type=TokenClassifierOutput,\r\n config_class=_CONFIG_FOR_DOC,\r\n )\r\n def forward(\r\n self,\r\n input_ids=None,\r\n attention_mask=None,\r\n token_type_ids=None,\r\n position_ids=None,\r\n head_mask=None,\r\n inputs_embeds=None,\r\n labels=None,\r\n output_attentions=None,\r\n output_hidden_states=None,\r\n return_dict=None,\r\n ):\r\n r\"\"\"\r\n labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):\r\n Labels for computing the token classification loss. Indices should be in ``[0, ..., config.num_labels -\r\n 1]``.\r\n \"\"\"\r\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\r\n\r\n outputs = self.bert(\r\n input_ids,\r\n attention_mask=attention_mask,\r\n token_type_ids=token_type_ids,\r\n position_ids=position_ids,\r\n head_mask=head_mask,\r\n inputs_embeds=inputs_embeds,\r\n output_attentions=output_attentions,\r\n output_hidden_states=output_hidden_states,\r\n return_dict=return_dict,\r\n )\r\n\r\n sequence_output = outputs[0]\r\n\r\n sequence_output = self.dropout(sequence_output)\r\n logits = self.classifier(sequence_output)\r\n\r\n loss = None\r\n if labels is not None:\r\n weights = torch.tensor([0.6, 0.3, 0.1])\r\n loss_fct = CrossEntropyLoss(weights=weights)\r\n # Only keep active parts of the loss\r\n if attention_mask is not None:\r\n active_loss = attention_mask.view(-1) == 1\r\n active_logits = logits.view(-1, self.num_labels)\r\n active_labels = torch.where(\r\n active_loss, labels.view(-1), torch.tensor(loss_fct.ignore_index).type_as(labels)\r\n )\r\n loss = loss_fct(active_logits, active_labels)\r\n else:\r\n loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))\r\n\r\n if not return_dict:\r\n output = (logits,) + outputs[2:]\r\n return ((loss,) + output) if loss is not None else output\r\n\r\n return TokenClassifierOutput(\r\n loss=loss,\r\n logits=logits,\r\n hidden_states=outputs.hidden_states,\r\n attentions=outputs.attentions,\r\n )\r\n```\r\n",
"@NielsRogge \r\nYou are right. I had done exactly this in my local (huggingface) transformers codebase.\r\nWorked as expected.\r\n\r\nI think this would be a useful feature if huggingface models come with it.",
"Oh ok so you want this to be an added feature. Not sure if this is possible. @LysandreJik what do you think?",
"Hi, thanks for opening an issue! The losses in the models are not made to be completely customizable, but to be the most common loss used in most cases; we favor simplicity here.\r\n\r\nThis is because defining your custom loss in a PyTorch model is very simple: when you do not pass the labels to your model, then you retrieve the model logits. You can then define a loss (and customize it as you wish!) and compute its value using these logits and your labels.\r\n\r\nHowever, this is not the first time this feature has been requested, and we could probably come up with an implementation that wouldn't complexify the code-base too much. If we see more of this request we'll take a deeper look at how to implement it.\r\n\r\nHere's a past issue discussing the same/similar: https://github.com/huggingface/transformers/issues/7024\r\n\r\ncc @sgugger @patrickvonplaten",
"I'm not sure whether it's a good idea to add such functionality to `modeling_bert.py` - there are too many possibilities. I think it could very well be added to the examples though.",
"Yes. Unfortunately, there are too many of these possibilities. \r\nMost users who are familiar with PyTorch can anyway make necessary changes to their local codebase quite easily.\r\nThanks.",
"Also see [this example in the documentation](https://huggingface.co/transformers/main_classes/trainer.html) (scroll a tiny bit down to the first example showing a subclass of `Trainer`) on how to change just the loss computation while using a model with `Trainer`.",
"Hi everyone,\r\n\r\nI am a student and therefore not yet very familiar with the way issues report work on git, so I aplogize in advance if this is not the proper place to post this message.\r\nI've stumbled onto an error when using the aforementioned method for designing a custom loss function.\r\nMy code is the following\r\n```\r\nconfig = AutoConfig.from_pretrained(\"bert-base-cased\", num_labels=2, finetuning_task=\"SST-2\")\r\n\r\n# Test with modified trainer for weighted CrossEntropyLoss\r\n\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\r\n \"dmis-lab/biobert-base-cased-v1.1\",\r\n from_tf=False,\r\n config=config)\r\n\r\nfrom torch import FloatTensor\r\n\r\nclassDistribution_raw = [97, 3]\r\nclassDistribution = [0.8, 0.2]\r\nnormedWeights = [1 - (x / sum(classDistribution)) for x in classDistribution]\r\nnormedWeights = FloatTensor(normedWeights).cuda()\r\n\r\nfrom torch.nn import CrossEntropyLoss\r\n\r\nclass MyTrainer(Trainer):\r\n def compute_loss(self, model, inputs, return_outputs=False):\r\n \r\n if \"labels\" in inputs:\r\n labels = inputs.pop(\"labels\")\r\n \r\n outputs = model(**inputs)\r\n logits = outputs.logits\r\n loss_function = CrossEntropyLoss(weight = normedWeights)\r\n\r\n if self.args.past_index >= 0:\r\n self._past = outputs[self.args.past_index]\r\n\r\n if labels is not None:\r\n loss = loss_function(logits, labels)\r\n else:\r\n # We don't use .loss here since the model may return tuples instead of ModelOutput.\r\n loss = outputs[\"loss\"] if isinstance(outputs, dict) else outputs[0]\r\n\r\n return (loss, outputs) if return_outputs else loss\r\n\r\ntrainer = MyTrainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_dataset,\r\n eval_dataset=eval_dataset,\r\n compute_metrics=compute_metrics_fn,\r\n tokenizer=tokenizer,\r\n )\r\n\r\n```\r\nAnd when I try to train the model using trainer.train(), i get the following error\r\n'NoneType' object has no attribute 'detach'\r\n\r\nThere is probably something wrong with the way I customized the loss function but I can't find where.\r\n\r\nBest regards,\r\nArthur\r\n",
"> Hi, thanks for opening an issue! The losses in the models are not made to be completely customizable, but to be the most common loss used in most cases; we favor simplicity here.\r\n> \r\n> This is because defining your custom loss in a PyTorch model is very simple: when you do not pass the labels to your model, then you retrieve the model logits. You can then define a loss (and customize it as you wish!) and compute its value using these logits and your labels.\r\n> \r\n> However, this is not the first time this feature has been requested, and we could probably come up with an implementation that wouldn't complexify the code-base too much. If we see more of this request we'll take a deeper look at how to implement it.\r\n> \r\n> Here's a past issue discussing the same/similar: #7024\r\n> \r\n> cc @sgugger @patrickvonplaten\r\n\r\n@sgugger @LysandreJik @NielsRogge \r\n\r\nI want to put a +1 on this feature request. Datasets with imbalanced datasets would benefit a lot from custom loss functions. And this shouldn't be a complex add (should just be one more kwarg?)."
] | 1,610 | 1,626 | 1,611 | NONE | null | # 🚀 Feature request
BertForTokenClassification models can compute cross entropy loss currently is only weighted. The option to have different weights for different classes can be useful in several use cases, including but not restricted to the problem of unbalanced output classes
## Motivation
Right now, although BertForTokenClassification models can compute cross entropy loss during the forward pass, there is no explicit way of weighting the different classes, which seems like a useful feature, as sequence tagging tasks often have unbalanced classes. I ran into the above problem during solving a academic problem. I looked at the code for the BertForTokenClassification model, and found that it should be quite easy to implement
## Your contribution
Not sure if I can help, coz I am not really familiar with the codebase. However, I can point out how and where to add code to implement the weighted loss very easily | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9625/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9625/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9624 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9624/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9624/comments | https://api.github.com/repos/huggingface/transformers/issues/9624/events | https://github.com/huggingface/transformers/pull/9624 | 787,119,905 | MDExOlB1bGxSZXF1ZXN0NTU1ODg1NTk3 | 9,624 | [wip] [deepspeed] AdamW is now supported by default | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,610 | 1,615 | 1,615 | CONTRIBUTOR | null | This PR syncs with changes in DeepSpeed since `deepspeed==0.3.10` and can only be merged when `deepspeed==0.3.11` or higher is released. So it may sit here for a while aggregating adjustments
* [x] AdamW is now supported by default so we can remove the now redundant config options and comments https://github.com/microsoft/DeepSpeed/pull/670
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9624/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9624/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9624",
"html_url": "https://github.com/huggingface/transformers/pull/9624",
"diff_url": "https://github.com/huggingface/transformers/pull/9624.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9624.patch",
"merged_at": 1615585208000
} |
https://api.github.com/repos/huggingface/transformers/issues/9623 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9623/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9623/comments | https://api.github.com/repos/huggingface/transformers/issues/9623/events | https://github.com/huggingface/transformers/issues/9623 | 787,077,821 | MDU6SXNzdWU3ODcwNzc4MjE= | 9,623 | wandb breaks tests - importlib.util.find_spec-related under forked process | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@sgugger, I think the culprit for the 2nd error, when I uninstalled wandb is:\r\n```\r\ndef is_wandb_available():\r\n if os.getenv(\"WANDB_DISABLED\"):\r\n return False\r\n return importlib.util.find_spec(\"wandb\") is not None\r\n```\r\nas it returns `True`, when it shouldn't since:\r\n```\r\nls -l /home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb\r\nls: cannot access '/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb': No such file or directory\r\n```\r\n\r\nYou can see it with any ddp test, so you don't need to install deepspeed or fairscale to see it, e.g. this fails too:\r\n```\r\npytest -sv examples/seq2seq/test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_ddp\r\n```\r\nBut a single unforked process test works just fine:\r\n```\r\npytest -sv examples/seq2seq/test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_dp\r\n```\r\n\r\n-----------------\r\n\r\nand then there is another problem which occurs with `wandb` installed. See the first error in OP.\r\n",
"But with `wandb` installed the 1st error I get with DDP too, w/o needing to fork a process in tests:\r\n\r\n```\r\npython -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path sshleifer/tiny-mbart --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size 4 --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500\r\n[...]\r\n[INFO|integrations.py:521] 2021-01-16 20:47:40,853 >> Automatic Weights & Biases logging enabled, to disable set os.environ[\"WANDB_DISABLED\"] = \"true\"\r\nwandb: Currently logged in as: stason (use `wandb login --relogin` to force relogin)\r\n2021-01-16 20:47:42.440849: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0\r\nwandb: Tracking run with wandb version 0.10.14\r\nwandb: Syncing run output_dir\r\nwandb: ⭐️ View project at https://wandb.ai/stason/huggingface\r\nwandb: 🚀 View run at https://wandb.ai/stason/huggingface/runs/82q4zxt2\r\nwandb: Run data is saved locally in /mnt/nvme1/code/huggingface/transformers-master/examples/seq2seq/wandb/run-20210116_204741-82q4zxt2\r\nwandb: Run `wandb offline` to turn off syncing.\r\n 0%| | 0/63 [00:00<?, ?it/s]\r\n[...]\r\nTraining completed. Do not forget to share your model on huggingface.co/models =)\r\n\r\nTraceback (most recent call last):\r\n File \"./finetune_trainer.py\", line 367, in <module>\r\n main()\r\n File \"./finetune_trainer.py\", line 297, in main\r\n train_result = trainer.train(\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py\", line 998, in train\r\n self.control = self.callback_handler.on_train_end(self.args, self.state, self.control)\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer_callback.py\", line 342, in on_train_end\r\n return self.call_event(\"on_train_end\", args, state, control)\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer_callback.py\", line 377, in call_event\r\n result = getattr(callback, event)(\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/integrations.py\", line 565, in on_train_end\r\n self._wandb.log({})\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/sdk/lib/preinit.py\", line 38, in preinit_wrapper\r\n raise wandb.Error(\"You must call wandb.init() before {}()\".format(name))\r\nwandb.errors.error.Error: You must call wandb.init() before wandb.log()\r\n 2021-01-16 20:47:46 | INFO | wandb.sdk.internal.internal | Internal process exited\r\n``` \r\n",
"I'm not sure I understand your first error. Could you give us more details? Are you saying that `importlib.from_spec` finds some weird \"wandb\" module but only in a distributed setting? I don't have wandb installed so I can't reproduce this at all.\r\n\r\nFor the last error, pinging @borisdayma ",
"I had a similar issue recently with python 3.8 but it worked with 3.7. It was due to a function from \"importlib\" which changed name. Is it the same?",
"@borisdayma, I have just installed python-3.7.9 and have the same issue there. Perhaps you had it working with python < 3.7.9?\r\nThe issue occurs with python-3.6.12 too.\r\n\r\n@sgugger yes, the problem occurs only when there is DDP. If I drop `-m torch.distributed.launch` the problem goes away so it has to do with forking/multi-processes. If you remember there was an Issue where someone also had the problem of using some transformers models because they were importing apex at load time and then it was crushing under `torch.mp` - this is definitely a totally different issue, but it's related that it has to do with multiproc.\r\n\r\nTo reproduce:\r\n\r\n```\r\npip install wandb\r\ncd examples/seq2seq\r\npython -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path sshleifer/tiny-mbart --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size 4 --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 50 --n_train 50\r\n```\r\nwhich results in:\r\n```\r\nwandb.errors.error.Error: You must call wandb.init() before wandb.log()\r\n```\r\n\r\nIf you then remove wand:\r\n```\r\npip uninstall wandb -y\r\n```\r\nThe 2nd error happens:\r\n```\r\nAttributeError: module 'wandb' has no attribute 'ensure_configured'\r\n```\r\n\r\nThe full traces are in the OP.\r\n\r\nPlease let me know if you need any other info.\r\n",
"I am running into the same issue with DDP @stas00 has https://github.com/huggingface/transformers/issues/9623#issuecomment-761731532\r\nI believe this might be due to the call to `on_train_end`, which calls `wandb.log({})` on all processes, and not just on world process 0, while [`wandb.init` was called only on world process 0](https://github.com/huggingface/transformers/blob/897a24c869e2ac2ed44f17956f1009fd8f055f5e/src/transformers/integrations.py#L541-L564): https://github.com/huggingface/transformers/blob/897a24c869e2ac2ed44f17956f1009fd8f055f5e/src/transformers/integrations.py#L586",
"Interesting, can you check it solves the issue on your side @tristandeleu ?\r\nIf so I'll be happy to make a PR.",
"It does work for me when I replace it with\r\n```python\r\nif state.is_world_process_zero:\r\n self._wandb.log({})\r\n```\r\n\r\nThere is also another thing I ran into at the same time: `_log_model` was not initialized on processes other than world 0, making the following check fail because it didn't know `self._log_model`. Adding `self._log_model = False` to `__init__` solved the issue.\r\n\r\nEDIT: This solves the issue with DDP though, I don't know if it also solves the original issue https://github.com/huggingface/transformers/issues/9623#issue-787077821",
"Don't hesitate to suggest a PR with your fix @tristandeleu ",
"> It does work for me when I replace it with\r\n> \r\n> ```python\r\n> if state.is_world_process_zero:\r\n> self._wandb.log({})\r\n\r\n\r\n> ```\r\n> \r\n> There is also another thing I ran into at the same time: `_log_model` was not initialized on processes other than world 0, making the following check fail because it didn't know `self._log_model`. Adding `self._log_model = False` to `__init__` solved the issue.\r\n> \r\n> EDIT: This solves the issue with DDP though, I don't know if it also solves the original issue [#9623 (comment)](https://github.com/huggingface/transformers/issues/9623#issue-787077821)\r\n\r\nI had the same problem. and I just use > if state.is_world_process_zero: self._wandb.log({}), forget self._log_model = False. Thanks !!!",
"> It does work for me when I replace it with\r\n> \r\n> ```python\r\n> if state.is_world_process_zero:\r\n> self._wandb.log({})\r\n> ```\r\n> \r\n> There is also another thing I ran into at the same time: `_log_model` was not initialized on processes other than world 0, making the following check fail because it didn't know `self._log_model`. Adding `self._log_model = False` to `__init__` solved the issue.\r\n> \r\n> EDIT: This solves the issue with DDP though, I don't know if it also solves the original issue [#9623 (comment)](https://github.com/huggingface/transformers/issues/9623#issue-787077821)\r\n\r\nEven with revising these codes, the program(with TPU) doesn't seem to stop at the end",
"@lkk12014402 can you confirm it still happens with latest HF master branch?\r\nIf so do you have a reproducible example you could share?",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,610 | 1,614 | 1,614 | CONTRIBUTOR | null | This has to do with a forked process environment:
I was running:
```
pytest -sv examples/seq2seq/test_finetune_trainer.py -k deepspeed
```
and was getting:
```
stderr: Traceback (most recent call last):
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/examples/seq2seq/finetune_trainer.py", line 367, in <module>
stderr: main()
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/examples/seq2seq/finetune_trainer.py", line 297, in main
stderr: train_result = trainer.train(
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/src/transformers/trainer.py", line 998, in train
stderr: self.control = self.callback_handler.on_train_end(self.args, self.state, self.control)
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/src/transformers/trainer_callback.py", line 342, in on_train_end
stderr: return self.call_event("on_train_end", args, state, control)
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/src/transformers/trainer_callback.py", line 377, in call_event
result = getattr(callback, event)(
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/src/transformers/integrations.py", line 565, in on_train_end
100%|██████████| 1/1 [00:00<00:00, 1.88it/s] self._wandb.log({})
stderr: File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/sdk/lib/preinit.py", line 37, in preinit_wrapper
stderr: raise wandb.Error("You must call wandb.init() before {}()".format(name))
stderr: wandb.errors.error.Error: You must call wandb.init() before wandb.log()
stderr: 2021-01-15 09:38:11 | INFO | wandb.sdk.internal.internal | Internal process exited
```
I tried to remove `wandb` and while `pip uninstall wandb` worked, wandb left code behind and I had to remove it manually:
```
rm -r /home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb
```
But the problem continued without having any wandb installed:
```
stderr: Traceback (most recent call last):
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/examples/seq2seq/finetune_trainer.py", line 367, in <module>
stderr: main()
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/examples/seq2seq/finetune_trainer.py", line 282, in main
stderr: trainer = Seq2SeqTrainer(
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/src/transformers/trainer.py", line 304, in __init__
stderr: self.callback_handler = CallbackHandler(
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/src/transformers/trainer_callback.py", line 282, in __init__
stderr: self.add_callback(cb)
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/src/transformers/trainer_callback.py", line 299, in add_callback
stderr: cb = callback() if isinstance(callback, type) else callback
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/src/transformers/integrations.py", line 488, in __init__
stderr: wandb.ensure_configured()
stderr: AttributeError: module 'wandb' has no attribute 'ensure_configured'
```
The strange `stderr` prefix is from our multiprocess testing setup which requires special handling as pytest can't handle DDP and a like on its own.
The only way I was able to overcome this is with:
```
export WANDB_DISABLED=true
```
I'm on `transformers` master. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9623/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9623/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9622 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9622/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9622/comments | https://api.github.com/repos/huggingface/transformers/issues/9622/events | https://github.com/huggingface/transformers/pull/9622 | 787,074,546 | MDExOlB1bGxSZXF1ZXN0NTU1ODQ2NzM3 | 9,622 | [deepspeed] --gradient_accumulation_steps fix | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,610 | 1,610 | 1,610 | CONTRIBUTOR | null | This PR fixes deepspeed integration to run `self.deepspeed.step()` instead of `optimizer.step()` + adds test. As it was failing when `--gradient_accumulation_steps 2` was added.
Thank you @jncasey for detecting this bug in https://github.com/microsoft/DeepSpeed/issues/671
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9622/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9622/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9622",
"html_url": "https://github.com/huggingface/transformers/pull/9622",
"diff_url": "https://github.com/huggingface/transformers/pull/9622.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9622.patch",
"merged_at": 1610734347000
} |
https://api.github.com/repos/huggingface/transformers/issues/9621 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9621/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9621/comments | https://api.github.com/repos/huggingface/transformers/issues/9621/events | https://github.com/huggingface/transformers/pull/9621 | 787,039,388 | MDExOlB1bGxSZXF1ZXN0NTU1ODE3NzEw | 9,621 | Remove duplicated extras["retrieval"] | {
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,610 | 1,610 | 1,610 | MEMBER | null | The `extras["retrieval"]` is defined a few lines above as:
https://github.com/huggingface/transformers/blob/28b26013abea3a49afeb46d36993a568ec98f39e/setup.py#L217-L222
and then it seems to be overridden just below, probably linking to `faiss-cpu` being included even on windows.
This PR removes the second assignment.
cc @LysandreJik @sgugger @stas00 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9621/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9621/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9621",
"html_url": "https://github.com/huggingface/transformers/pull/9621",
"diff_url": "https://github.com/huggingface/transformers/pull/9621.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9621.patch",
"merged_at": 1610961862000
} |
https://api.github.com/repos/huggingface/transformers/issues/9620 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9620/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9620/comments | https://api.github.com/repos/huggingface/transformers/issues/9620/events | https://github.com/huggingface/transformers/issues/9620 | 787,023,836 | MDU6SXNzdWU3ODcwMjM4MzY= | 9,620 | SQuAD 2.0 metric not supported | {
"login": "yonatanbitton",
"id": 26148975,
"node_id": "MDQ6VXNlcjI2MTQ4OTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/26148975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonatanbitton",
"html_url": "https://github.com/yonatanbitton",
"followers_url": "https://api.github.com/users/yonatanbitton/followers",
"following_url": "https://api.github.com/users/yonatanbitton/following{/other_user}",
"gists_url": "https://api.github.com/users/yonatanbitton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonatanbitton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonatanbitton/subscriptions",
"organizations_url": "https://api.github.com/users/yonatanbitton/orgs",
"repos_url": "https://api.github.com/users/yonatanbitton/repos",
"events_url": "https://api.github.com/users/yonatanbitton/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonatanbitton/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger would know about this TODO; I think the fix has landed in `datasets`, right?",
"Yes, this should be fixed directly from `datasets` now, will update the script this afternoon."
] | 1,610 | 1,611 | 1,611 | NONE | null | Hello.
I'm trying to run the official `run_qa.py` code for SQuAD 2.0.
You have an open TODO here that is causing a bug: https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_qa.py#L436
I would like to know what is the status of this TODO, and if it is going to be updated, or is there a way around it.
This is the current code:
```python
current_dir = os.path.sep.join(os.path.join(__file__).split(os.path.sep)[:-1])
metric = load_metric(os.path.join(current_dir, "squad_v2_local") if data_args.version_2_with_negative else "squad")
```
I receive:
```
FileNotFoundError: Couldn't find file locally at .../squad_v2_local/squad_v2_local.py,
```
I've tried to change it to:
```python
metric = load_metric("squad_v2" if data_args.version_2_with_negative else "squad")
```
But this is the stacktrace I receive:
```
Traceback (most recent call last):
File "/data/users/yonatab/transformers_pip/QA/run_qa_val_more_valueable.py", line 557, in <module>
main()
File "/data/users/yonatab/transformers_pip/QA/run_qa_val_more_valueable.py", line 538, in main
results = trainer.evaluate()
File "/data/users/yonatab/transformers_pip/QA/trainer_qa.py", line 63, in evaluate
metrics = self.compute_metrics(eval_preds)
File "/data/users/yonatab/transformers_pip/QA/run_qa_val_more_valueable.py", line 499, in compute_metrics
return metric.compute(predictions=p.predictions, references=p.label_ids)
File "/data/users/yonatab/transformers_pip/trans_pip/lib/python3.6/site-packages/datasets/metric.py", line 398, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "/home/ec2-user/.cache/huggingface/modules/datasets_modules/metrics/squad_v2/7529efd518b03f775290694e7b797412cb2253e90b4f843af83cf7434cccb3a8/squad_v2.py", line 108, in _compute
exact_raw, f1_raw = get_raw_scores(dataset, predictions)
File "/home/ec2-user/.cache/huggingface/modules/datasets_modules/metrics/squad_v2/7529efd518b03f775290694e7b797412cb2253e90b4f843af83cf7434cccb3a8/evaluate.py", line 111, in get_raw_scores
gold_answers = [a["text"] for a in qa["answers"] if normalize_answer(a["text"])]
File "/home/ec2-user/.cache/huggingface/modules/datasets_modules/metrics/squad_v2/7529efd518b03f775290694e7b797412cb2253e90b4f843af83cf7434cccb3a8/evaluate.py", line 111, in <listcomp>
gold_answers = [a["text"] for a in qa["answers"] if normalize_answer(a["text"])]
TypeError: string indices must be integers
100%|███████████████████████████████████████████| 13/13 [00:05<00:00, 2.51it/s]
```
How can I solve it?
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9620/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9620/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9619 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9619/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9619/comments | https://api.github.com/repos/huggingface/transformers/issues/9619/events | https://github.com/huggingface/transformers/issues/9619 | 786,865,584 | MDU6SXNzdWU3ODY4NjU1ODQ= | 9,619 | Train robertatokenizer failed due to pad token not found | {
"login": "pjuangph",
"id": 9328717,
"node_id": "MDQ6VXNlcjkzMjg3MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9328717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pjuangph",
"html_url": "https://github.com/pjuangph",
"followers_url": "https://api.github.com/users/pjuangph/followers",
"following_url": "https://api.github.com/users/pjuangph/following{/other_user}",
"gists_url": "https://api.github.com/users/pjuangph/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pjuangph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjuangph/subscriptions",
"organizations_url": "https://api.github.com/users/pjuangph/orgs",
"repos_url": "https://api.github.com/users/pjuangph/repos",
"events_url": "https://api.github.com/users/pjuangph/events{/privacy}",
"received_events_url": "https://api.github.com/users/pjuangph/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Roberta was train on a causal language model objective, therefore the `LineByLineDataset` is not adapted to train it: it considers one line for one text when the roberta objective is to have several lines concatenated and separated by the sep token until it reaches the block size, to avoid padding. \r\n\r\nYou need to use a different dataset for this. You should also check the new [`run_mlm` script](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py) that offers both options.",
"Hi, I had the same issue, here are the workarounds I used\r\nPlatform: Ubuntu 18\r\nPython version: 3.7.9\r\nPyTorch version (GPU): 1.7.1 \r\ncuda11\r\n\r\n- save your tokenizer with `save_model()`, instead of `save()`, will save a `merges.json` and a` vocab.json`. \r\n\r\n` tokenizer.save_model('models/BPEtokenizer')`\r\n\r\n- you'll need `config.json`, a `tokenizer_config.json` and a `special_tokens_map.json` files in your tokenizer repo, you can get them from the base model you want to use your tokenizer with, i.e. just quickly run the `run_mlm` script with 2 batches to get them and add them in your tokenizer repo.\r\n\r\nI'm not sure the config.json is actually loaded, as it is the model config and not the tokenizer's, but the script wants is to accept your tokenizer path.\r\ntokenizer repo should contain:\r\n```\r\n|__config.json\r\n|__merges.txt\r\n|__special_tokens_map.json\r\n|__tokenizer_config.json\r\n|__vocab.json\r\n```\r\n\r\n- in `tokenizer_config.json`, change the ` name_or_path:\"roberta-base\"` to `model_type: \"roberta\"`\r\n\r\n- then train your model running the mlm script with your options and\r\n\r\n`-- tokenizer_name ./models/BPEtokenizer`\r\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,610 | 1,614 | 1,614 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Windows
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7? 3080 RTX
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
tokenizers: @mfuntowicz
Trainer: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...): Roberta
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
My first step is to download some of the esperberto data from the sites mentioned in this tutorial https://huggingface.co/blog/how-to-train
Few issues
1. Regarding the tutorial, they make you train a ByteLevelBPETokenizer but this is never used in the training code. The training code isn't even in the tutorial 👎
2. I came across this https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb
It looks good except whatever ByteLevelBPETokenizer is never used in the training process so I tried to find a way to use it. I tried two approaches both result in the same outcome. I tried using the BPE and not ByteLevelBPETokenizer. I have no clue what is the best practice or why neither of them are working.
This is my code to do the tokenizer. You can uncomment whatever
```
#! pip install tokenizers
#%% Import Statements
from pathlib import Path
from transformers import RobertaTokenizer
from tokenizers import Tokenizer
from tokenizers.trainers import BpeTrainer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
# from tokenizers import ByteLevelBPETokenizer
# from tokenizers.implementations import ByteLevelBPETokenizer
from tokenizers.processors import BertProcessing
import os.path as osp
#%% Train Tokenizer
if (not osp.exists('models/BPEtokenizer.json')):
paths = [str(x) for x in Path("./eo_data/").glob("**/*.txt")]
# Initialize a tokenizer
# tokenizer = ByteLevelBPETokenizer()
# # Customize training
# tokenizer.train(files=paths, vocab_size=52000, min_frequency=3, special_tokens=[
# "<s>",
# "<pad>",
# "</s>",
# "<unk>",
# "<mask>"
# ])
tokenizer = Tokenizer(BPE())
trainer = BpeTrainer(vocab_size=52000,min_frequency=3, special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.pre_tokenizer = Whitespace()
tokenizer.train(trainer, paths)
# Save files to disk
tokenizer.save('models/BPEtokenizer.json')
#%% Tokenize
tokenizer = Tokenizer.from_file('models/BPEtokenizer.json')
# tokenizer._tokenizer.post_processor = BertProcessing(
# ("</s>", tokenizer.token_to_id("</s>")),
# ("<s>", tokenizer.token_to_id("<s>")),
# )
# tokenizer.enable_truncation(max_length=512)
output = tokenizer.encode("Mi estas Julien.😁")
print(output.tokens)
print(output.ids)
# Encoding(num_tokens=7, ...)
# tokens: ['<s>', 'Mi', 'Ġestas', 'ĠJuli', 'en', '.', '</s>']
```
This is my code to do training
```
import torch
from transformers import RobertaConfig
from transformers import RobertaTokenizerFast
from transformers import RobertaForMaskedLM
from transformers import LineByLineTextDataset
from transformers import DataCollatorForLanguageModeling
from pathlib import Path
from transformers import DataCollatorForLanguageModeling
from tokenizers import ByteLevelBPETokenizer
from transformers import PreTrainedTokenizerFast
from tokenizers import Tokenizer
# Tutorial from https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb#scrollTo=BzMqR-dzF4Ro
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# tokenizer = RobertaTokenizerFast.from_pretrained("./EsperBERTo", max_len=512)
# tokenizer = ByteLevelBPETokenizer("models/esperberto-vocab.json","models/esperberto-merges.txt") # ? This actually doesn't work. You will get an error saying tokenizer is not callable.
tokenizer = PreTrainedTokenizerFast(tokenizer_file='models/BPEtokenizer.json')
# tokenizer = Tokenizer.from_file('models/BPEtokenizer.json')
mlm=False
config = RobertaConfig(
vocab_size=52_000,
max_position_embeddings=514,
num_attention_heads=12,
num_hidden_layers=6,
type_vocab_size=1,
)
# Training from scratch
model = RobertaForMaskedLM(config=config)
model.num_parameters()
paths = [str(x) for x in Path("eo_data/").glob("**/*.txt")]
# Build the dataset
dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="eo_data/shuff-orig/eo/eo.txt",block_size=128)
# mlm = mask modeling language
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=mlm, mlm_probability=0.15
)
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir="models/EsperBERTo-small",
overwrite_output_dir=True,
num_train_epochs=1000,
per_gpu_train_batch_size=64,
save_steps=10_000,
save_total_limit=2,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dataset)
trainer.train()
```
I keep getting the error `Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.`
Also I couldn't set mlm=True either. Do you have any good tutorials on how to train your own set of data using Roberta?
If anyone wants to pull my files you can grab them and the dataset here
https://1drv.ms/u/s!Apa0_j-AivqTpqNz7r0M3NNhCm2W_A?e=BMLvqv
If you guys resolve this then I'll update and post a public google colab
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9619/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9619/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9618 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9618/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9618/comments | https://api.github.com/repos/huggingface/transformers/issues/9618/events | https://github.com/huggingface/transformers/issues/9618 | 786,855,831 | MDU6SXNzdWU3ODY4NTU4MzE= | 9,618 | Text generation pipeline - output_scores parameter | {
"login": "bala1802",
"id": 22103095,
"node_id": "MDQ6VXNlcjIyMTAzMDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/22103095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bala1802",
"html_url": "https://github.com/bala1802",
"followers_url": "https://api.github.com/users/bala1802/followers",
"following_url": "https://api.github.com/users/bala1802/following{/other_user}",
"gists_url": "https://api.github.com/users/bala1802/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bala1802/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bala1802/subscriptions",
"organizations_url": "https://api.github.com/users/bala1802/orgs",
"repos_url": "https://api.github.com/users/bala1802/repos",
"events_url": "https://api.github.com/users/bala1802/events{/privacy}",
"received_events_url": "https://api.github.com/users/bala1802/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,610 | 1,614 | 1,614 | NONE | null | In `text-generation` pipeline, I am looking for a parameter which calculates the confidence score of the generated text. Source: [here](https://huggingface.co/transformers/main_classes/pipelines.html#transformers.TextGenerationPipeline)
I am assuming that, `output_scores` (from [here](https://huggingface.co/transformers/main_classes/configuration.html#transformers.PretrainedConfig)) parameter is not returned while prediction,
**Code**:
`predictedText = pipeline('text-generation',model=checkpoint_path, tokenizer=gpt2_tokenizer, config={'max_length':20, 'output_scores':True})`
`predictedText('This is a ')`
**Output**:
`Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
[{'generated_text': 'This is a Generated Text'}]`
In the output, I am looking for a confidence score of the predicted text to be displayed
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9618/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9617 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9617/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9617/comments | https://api.github.com/repos/huggingface/transformers/issues/9617/events | https://github.com/huggingface/transformers/issues/9617 | 786,798,712 | MDU6SXNzdWU3ODY3OTg3MTI= | 9,617 | Error in GPT2 while using gradient checkpointing. | {
"login": "devrimcavusoglu",
"id": 46989091,
"node_id": "MDQ6VXNlcjQ2OTg5MDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/46989091?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/devrimcavusoglu",
"html_url": "https://github.com/devrimcavusoglu",
"followers_url": "https://api.github.com/users/devrimcavusoglu/followers",
"following_url": "https://api.github.com/users/devrimcavusoglu/following{/other_user}",
"gists_url": "https://api.github.com/users/devrimcavusoglu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/devrimcavusoglu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/devrimcavusoglu/subscriptions",
"organizations_url": "https://api.github.com/users/devrimcavusoglu/orgs",
"repos_url": "https://api.github.com/users/devrimcavusoglu/repos",
"events_url": "https://api.github.com/users/devrimcavusoglu/events{/privacy}",
"received_events_url": "https://api.github.com/users/devrimcavusoglu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hitting this issue as well."
] | 1,610 | 1,610 | 1,610 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.0
- Platform: Linux | 5.4.0-60-generic | 18.04.1-Ubuntu SMP | x86_64
- Python version: 3.7.7
- PyTorch version (GPU?): 1.7.1
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
@LysandreJik
## Information
Model I am using: GPT2
The problem arises when using:
* GPT2LMHeadModel with config `gradient_checkpointing: True`
When using GPT2 pretrained model, with the latest releases (4.x), `gpt2_modeling.py` fails due to the behavior arising from pytorch. It was due to the fact that `torch.utils.checkpoint.checkpoint` import is malfunctioning, see [this](https://discuss.pytorch.org/t/attributeerror-module-torch-utils-has-no-attribute-checkpoint/101543) discussion, but I tried with python3.8 as well, and the problem still occured. When I observe the other scripts for modeling on different models (like BERT, etc.) the import statement for `checkpoint` is handled successfully, but GPT2 script fails. It is discussed that the problem arises due to the import behaviour of python.
```
File
"/home/username/.miniconda3/envs/prj/lib/python3.7/site-packages/torch/nn/modules/module.py", line
727, in _call_impl
result = self.forward(*input, **kwargs)
File
"/home/username/.miniconda3/envs/prj/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py", line
901, in forward
return_dict = return_dict,
File
"/home/username/.miniconda3/envs/prj/lib/python3.7/site-packages/torch/nn/modules/module.py", line
727, in _call_impl
result = self.forward(*input, **kwargs)
File
"/home/username/.miniconda3/envs/prj/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py", line
728, in forward
outputs = torch.utils.checkpoint.checkpoint(
AttributeError: module 'torch.utils' has no attribute 'checkpoint'
```
## Suggestion
in `modeling_gpt2.py` add this import `import torch.utils.checkpoint`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9617/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9617/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9616 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9616/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9616/comments | https://api.github.com/repos/huggingface/transformers/issues/9616/events | https://github.com/huggingface/transformers/pull/9616 | 786,767,099 | MDExOlB1bGxSZXF1ZXN0NTU1NTkxODgx | 9,616 | Fix label datatype in TF Trainer | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I agree with Sylvain that while this is not tested, it's hard to recommend using it."
] | 1,610 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the case where `labels` can be either a `dict` or a `tf.Tensor` when doing gradient accumulation.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9616/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9616",
"html_url": "https://github.com/huggingface/transformers/pull/9616",
"diff_url": "https://github.com/huggingface/transformers/pull/9616.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9616.patch",
"merged_at": 1611140880000
} |
https://api.github.com/repos/huggingface/transformers/issues/9615 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9615/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9615/comments | https://api.github.com/repos/huggingface/transformers/issues/9615/events | https://github.com/huggingface/transformers/pull/9615 | 786,747,866 | MDExOlB1bGxSZXF1ZXN0NTU1NTc2MTEx | 9,615 | Ignore lm_head decoder bias warning | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Is it normal that this bias is missing?",
"By answering your question I realized this could be upstreamed directly in the RoBERTa model, which I just did.\r\n\r\nYou can take a look at my answer this morning to a similar question: https://github.com/huggingface/transformers/issues/6193#issuecomment-760797867.\r\n\r\nXLM-R is an alias of the RoBERTa model, hence why they both need this."
] | 1,610 | 1,610 | 1,610 | MEMBER | null | Removes the warning that's currently happening when importing `xlm-roberta-base` with any of the XLM-R models.
Closes https://github.com/huggingface/transformers/issues/9579 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9615/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9615/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9615",
"html_url": "https://github.com/huggingface/transformers/pull/9615",
"diff_url": "https://github.com/huggingface/transformers/pull/9615.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9615.patch",
"merged_at": 1610721623000
} |
https://api.github.com/repos/huggingface/transformers/issues/9614 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9614/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9614/comments | https://api.github.com/repos/huggingface/transformers/issues/9614/events | https://github.com/huggingface/transformers/issues/9614 | 786,670,969 | MDU6SXNzdWU3ODY2NzA5Njk= | 9,614 | Conditional branching logic in modeling_tf_xlnet.py causing error with TF Graph | {
"login": "ANarayan",
"id": 5660075,
"node_id": "MDQ6VXNlcjU2NjAwNzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5660075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ANarayan",
"html_url": "https://github.com/ANarayan",
"followers_url": "https://api.github.com/users/ANarayan/followers",
"following_url": "https://api.github.com/users/ANarayan/following{/other_user}",
"gists_url": "https://api.github.com/users/ANarayan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ANarayan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ANarayan/subscriptions",
"organizations_url": "https://api.github.com/users/ANarayan/orgs",
"repos_url": "https://api.github.com/users/ANarayan/repos",
"events_url": "https://api.github.com/users/ANarayan/events{/privacy}",
"received_events_url": "https://api.github.com/users/ANarayan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,610 | 1,614 | 1,614 | NONE | null | Hi @TevenLeScao ,
I am encountering an error when running the TFXLNet model inside of a tensorflow graph.
Here is some code to reproduce the issue:
```
from transformers import XLNetTokenizer, TFXLNetModel
import tensorflow as tf
tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = TFXLNetModel.from_pretrained('xlnet-base-cased')
@tf.function
def train_step(inputs, mask, token_type_ids):
with tf.GradientTape() as tape:
a = model({
"input_ids": inputs,
"training": True,
"attention_mask": mask,
"token_type_ids": token_type_ids,
})
train_step(inputs, mask, token_type_ids)
```
The error seems to be caused by L765-L768 in modeling_tf_xlnet.py [here](https://github.com/huggingface/transformers/blob/82498cbc37d5c15520c7bddde5d804c804eee498/src/transformers/models/xlnet/modeling_tf_xlnet.py#L765)
Here is the error message:
> TypeError: in user code:
> <ipython-input-41-b79f96ef9347>:4 train_step *
> a = model({
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/transformers/models/xlnet/modeling_tf_xlnet.py:1189 call *
> outputs = self.transformer(
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/transformers/models/xlnet/modeling_tf_xlnet.py:753 call *
> if inputs["use_mems"]:
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/tensorflow/python/autograph/operators/control_flow.py:951 if_stmt
> _tf_if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts)
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/tensorflow/python/autograph/operators/control_flow.py:996 _tf_if_stmt
> cond, aug_body, aug_orelse, strict=True)
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper
> return target(*args, **kwargs)
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py:507 new_func
> return func(*args, **kwargs)
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/tensorflow/python/ops/control_flow_ops.py:1180 cond
> return cond_v2.cond_v2(pred, true_fn, false_fn, name)
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/tensorflow/python/ops/cond_v2.py:92 cond_v2
> op_return_value=pred)
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py:986 func_graph_from_py_func
> func_outputs = python_func(*func_args, **func_kwargs)
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/tensorflow/python/autograph/operators/control_flow.py:992 aug_orelse
> _verify_tf_cond_vars(new_body_vars_[0], new_orelse_vars, symbol_names)
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/tensorflow/python/autograph/operators/control_flow.py:286 _verify_tf_cond_vars
> ' branches:\n\n{}'.format(name, str(e)))
> TypeError: 'new_mems' must have the same nested structure in the main and else branches:
> The two structures don't have the same nested structure.
> First structure: type=tuple str=(<tf.Tensor 'tfxl_net_model/transformer/cond_2/StopGradient:0' shape=(44, 18, 768) dtype=float32>,)
> Second structure: type=tuple str=()
> More specifically: The two structures don't have the same number of elements. First structure: type=tuple str=(<tf.Tensor 'tfxl_net_model/transformer/cond_2/StopGradient:0' shape=(44, 18, 768) dtype=float32>,). Second structure: type=tuple str=()
> Entire first structure:
> (.,)
> Entire second structure:
> () | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9614/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9614/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9613 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9613/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9613/comments | https://api.github.com/repos/huggingface/transformers/issues/9613/events | https://github.com/huggingface/transformers/pull/9613 | 786,656,690 | MDExOlB1bGxSZXF1ZXN0NTU1NDk5MTM3 | 9,613 | training_loss in TFTrainer | {
"login": "kiyoungkim1",
"id": 37245002,
"node_id": "MDQ6VXNlcjM3MjQ1MDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/37245002?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kiyoungkim1",
"html_url": "https://github.com/kiyoungkim1",
"followers_url": "https://api.github.com/users/kiyoungkim1/followers",
"following_url": "https://api.github.com/users/kiyoungkim1/following{/other_user}",
"gists_url": "https://api.github.com/users/kiyoungkim1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kiyoungkim1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kiyoungkim1/subscriptions",
"organizations_url": "https://api.github.com/users/kiyoungkim1/orgs",
"repos_url": "https://api.github.com/users/kiyoungkim1/repos",
"events_url": "https://api.github.com/users/kiyoungkim1/events{/privacy}",
"received_events_url": "https://api.github.com/users/kiyoungkim1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Here the logs of a training with\r\n```\r\npython run_tf_glue.py --task_name mrpc --model_name_or_path bert-base-cased --output_dir model --num_train_epochs 4 --per_device_train_batch_size 16 --per_device_eval_batch_size 16 --do_train --do_eval --do_predict --logging_steps 10 --overwrite_output_dir --gradient_accumulation_steps 2\r\n```\r\n\r\n```\r\n[INFO|trainer_tf.py:522] 2021-01-15 10:27:59,116 >> ***** Running training *****\r\n[INFO|trainer_tf.py:523] 2021-01-15 10:27:59,123 >> Num examples = 3668\r\n[INFO|trainer_tf.py:525] 2021-01-15 10:27:59,124 >> Num Epochs = 4\r\n[INFO|trainer_tf.py:526] 2021-01-15 10:27:59,124 >> Instantaneous batch size per device = 16\r\n[INFO|trainer_tf.py:527] 2021-01-15 10:27:59,124 >> Total train batch size (w. parallel, distributed & accumulation) = 32\r\n[INFO|trainer_tf.py:530] 2021-01-15 10:27:59,125 >> Gradient Accumulation steps = 2\r\n[INFO|trainer_tf.py:531] 2021-01-15 10:27:59,136 >> Steps per epoch = 115\r\n[INFO|trainer_tf.py:532] 2021-01-15 10:27:59,137 >> Total optimization steps = 460\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:28:50,371 >> {'loss': 0.6347228, 'learning_rate': 4.891304e-05, 'epoch': 0.08695652173913043, 'step': 10}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:28:56,791 >> {'loss': 0.604829, 'learning_rate': 4.7826084e-05, 'epoch': 0.17391304347826086, 'step': 20}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:29:03,192 >> {'loss': 0.62615454, 'learning_rate': 4.673913e-05, 'epoch': 0.2608695652173913, 'step': 30}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:29:09,614 >> {'loss': 0.61436784, 'learning_rate': 4.5652174e-05, 'epoch': 0.34782608695652173, 'step': 40}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:29:16,163 >> {'loss': 0.60542804, 'learning_rate': 4.456522e-05, 'epoch': 0.43478260869565216, 'step': 50}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:29:22,633 >> {'loss': 0.60221016, 'learning_rate': 4.347826e-05, 'epoch': 0.5217391304347826, 'step': 60}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:29:29,129 >> {'loss': 0.59315145, 'learning_rate': 4.2391304e-05, 'epoch': 0.6086956521739131, 'step': 70}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:29:35,655 >> {'loss': 0.5896678, 'learning_rate': 4.1304345e-05, 'epoch': 0.6956521739130435, 'step': 80}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:29:42,209 >> {'loss': 0.5796127, 'learning_rate': 4.0217386e-05, 'epoch': 0.782608695652174, 'step': 90}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:29:48,779 >> {'loss': 0.5678522, 'learning_rate': 3.9130435e-05, 'epoch': 0.8695652173913043, 'step': 100}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:29:55,365 >> {'loss': 0.55807614, 'learning_rate': 3.8043476e-05, 'epoch': 0.9565217391304348, 'step': 110}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:30:04,348 >> {'loss': 0.32373077, 'learning_rate': 3.695652e-05, 'epoch': 1.0434782608695652, 'step': 120}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:30:10,920 >> {'loss': 0.3261666, 'learning_rate': 3.5869565e-05, 'epoch': 1.1304347826086956, 'step': 130}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:30:17,516 >> {'loss': 0.34052417, 'learning_rate': 3.478261e-05, 'epoch': 1.2173913043478262, 'step': 140}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:30:24,125 >> {'loss': 0.35018474, 'learning_rate': 3.369565e-05, 'epoch': 1.3043478260869565, 'step': 150}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:30:30,714 >> {'loss': 0.35887596, 'learning_rate': 3.260869e-05, 'epoch': 1.391304347826087, 'step': 160}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:30:37,303 >> {'loss': 0.34891757, 'learning_rate': 3.1521737e-05, 'epoch': 1.4782608695652173, 'step': 170}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:30:43,900 >> {'loss': 0.33256933, 'learning_rate': 3.0434781e-05, 'epoch': 1.5652173913043477, 'step': 180}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:30:50,481 >> {'loss': 0.32668048, 'learning_rate': 2.934782e-05, 'epoch': 1.6521739130434783, 'step': 190}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:30:57,079 >> {'loss': 0.31888676, 'learning_rate': 2.8260865e-05, 'epoch': 1.7391304347826086, 'step': 200}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:31:03,688 >> {'loss': 0.31276095, 'learning_rate': 2.7173912e-05, 'epoch': 1.8260869565217392, 'step': 210}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:31:10,284 >> {'loss': 0.30366346, 'learning_rate': 2.6086956e-05, 'epoch': 1.9130434782608696, 'step': 220}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:31:16,885 >> {'loss': 0.2903903, 'learning_rate': 2.5e-05, 'epoch': 2.0, 'step': 230}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:31:26,095 >> {'loss': 0.15675393, 'learning_rate': 2.3913042e-05, 'epoch': 2.0869565217391304, 'step': 240}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:31:32,671 >> {'loss': 0.14483282, 'learning_rate': 2.2826087e-05, 'epoch': 2.1739130434782608, 'step': 250}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:31:39,275 >> {'loss': 0.14147088, 'learning_rate': 2.173913e-05, 'epoch': 2.260869565217391, 'step': 260}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:31:45,866 >> {'loss': 0.13758971, 'learning_rate': 2.0652174e-05, 'epoch': 2.3478260869565215, 'step': 270}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:31:52,464 >> {'loss': 0.13357341, 'learning_rate': 1.9565217e-05, 'epoch': 2.4347826086956523, 'step': 280}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:31:59,049 >> {'loss': 0.12877393, 'learning_rate': 1.8478258e-05, 'epoch': 2.5217391304347827, 'step': 290}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:32:05,682 >> {'loss': 0.13753517, 'learning_rate': 1.7391301e-05, 'epoch': 2.608695652173913, 'step': 300}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:32:12,281 >> {'loss': 0.1319594, 'learning_rate': 1.6304344e-05, 'epoch': 2.6956521739130435, 'step': 310}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:32:18,883 >> {'loss': 0.12644322, 'learning_rate': 1.5217389e-05, 'epoch': 2.782608695652174, 'step': 320}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:32:25,472 >> {'loss': 0.12481367, 'learning_rate': 1.41304345e-05, 'epoch': 2.869565217391304, 'step': 330}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:32:32,082 >> {'loss': 0.12073966, 'learning_rate': 1.3043478e-05, 'epoch': 2.9565217391304346, 'step': 340}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:32:41,403 >> {'loss': 0.10288413, 'learning_rate': 1.1956521e-05, 'epoch': 3.0434782608695654, 'step': 350}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:32:47,955 >> {'loss': 0.09858045, 'learning_rate': 1.0869565e-05, 'epoch': 3.130434782608696, 'step': 360}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:32:54,532 >> {'loss': 0.07963112, 'learning_rate': 9.782609e-06, 'epoch': 3.217391304347826, 'step': 370}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:33:01,104 >> {'loss': 0.08428383, 'learning_rate': 8.6956525e-06, 'epoch': 3.3043478260869565, 'step': 380}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:33:07,684 >> {'loss': 0.0844244, 'learning_rate': 7.6086967e-06, 'epoch': 3.391304347826087, 'step': 390}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:33:14,284 >> {'loss': 0.08690852, 'learning_rate': 6.5217405e-06, 'epoch': 3.4782608695652173, 'step': 400}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:33:20,877 >> {'loss': 0.0832295, 'learning_rate': 5.434781e-06, 'epoch': 3.5652173913043477, 'step': 410}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:33:27,494 >> {'loss': 0.078029804, 'learning_rate': 4.3478244e-06, 'epoch': 3.6521739130434785, 'step': 420}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:33:34,095 >> {'loss': 0.079320244, 'learning_rate': 3.2608687e-06, 'epoch': 3.7391304347826084, 'step': 430}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:33:40,709 >> {'loss': 0.076877564, 'learning_rate': 2.1739122e-06, 'epoch': 3.8260869565217392, 'step': 440}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:33:47,324 >> {'loss': 0.07551385, 'learning_rate': 1.0869561e-06, 'epoch': 3.9130434782608696, 'step': 450}\r\n[INFO|trainer_tf.py:398] 2021-01-15 10:33:53,941 >> {'loss': 0.07157838, 'learning_rate': 0.0, 'epoch': 4.0, 'step': 460}\r\n```\r\n\r\nNothing seems wrong in the loss computation.\r\n\r\n```\r\neval_acc = 0.8518518518518519\r\neval_f1 = 0.8954248366013072\r\neval_acc_and_f1 = 0.8736383442265796\r\n```",
"@jplu \r\nYes, you are right, and I am wrong. \r\nMy dataset format was wrong (```labels``` in dataset for ```TFGPT2LMHead``` should be ```tensor```, but was ```dict``` yesterday). Sorry for the confusion.\r\n\r\nHowever, there is one problem though. ```training_loss``` is not properly calculated with successive training. Run ```run_tf_glue.py``` with ```save_steps```. I train 40 steps, and train again from that ckpt. Results are shown below, where loss increases and decreases.\r\n\r\n```\r\n[INFO|trainer_tf.py:398] 2021-01-15 16:19:09,011 >> {'loss': 0.10616198, 'learning_rate': 4.456522e-05, 'epoch': 0.43478260869565216, 'step': 50}\r\n[INFO|trainer_tf.py:398] 2021-01-15 16:19:20,271 >> {'loss': 0.17419913, 'learning_rate': 4.347826e-05, 'epoch': 0.5217391304347826, 'step': 60}\r\n[INFO|trainer_tf.py:398] 2021-01-15 16:19:33,935 >> {'loss': 0.2174806, 'learning_rate': 4.2391304e-05, 'epoch': 0.6086956521739131, 'step': 70}\r\n[INFO|trainer_tf.py:398] 2021-01-15 16:19:47,342 >> {'loss': 0.25698015, 'learning_rate': 4.1304345e-05, 'epoch': 0.6956521739130435, 'step': 80}\r\n[INFO|trainer_tf.py:398] 2021-01-15 16:20:04,633 >> {'loss': 0.28348362, 'learning_rate': 4.0217386e-05, 'epoch': 0.782608695652174, 'step': 90}\r\n[INFO|trainer_tf.py:398] 2021-01-15 16:20:22,000 >> {'loss': 0.29971126, 'learning_rate': 3.9130435e-05, 'epoch': 0.8695652173913043, 'step': 100}\r\n[INFO|trainer_tf.py:398] 2021-01-15 16:20:35,515 >> {'loss': 0.30674613, 'learning_rate': 3.8043476e-05, 'epoch': 0.9565217391304348, 'step': 110}\r\n[INFO|trainer_tf.py:398] 2021-01-15 16:20:57,368 >> {'loss': 0.3384542, 'learning_rate': 3.695652e-05, 'epoch': 1.0434782608695652, 'step': 120}\r\n[INFO|trainer_tf.py:398] 2021-01-15 16:21:10,066 >> {'loss': 0.25661522, 'learning_rate': 3.5869565e-05, 'epoch': 1.1304347826086956, 'step': 130}\r\n[INFO|trainer_tf.py:398] 2021-01-15 16:21:23,626 >> {'loss': 0.2714903, 'learning_rate': 3.478261e-05, 'epoch': 1.2173913043478262, 'step': 140}\r\n[INFO|trainer_tf.py:398] 2021-01-15 16:21:42,122 >> {'loss': 0.262019, 'learning_rate': 3.369565e-05, 'epoch': 1.3043478260869565, 'step': 150}\r\n[INFO|trainer_tf.py:398] 2021-01-15 16:21:55,652 >> {'loss': 0.27134755, 'learning_rate': 3.260869e-05, 'epoch': 1.391304347826087, 'step': 160}\r\n```\r\n\r\nThis comes from ```steps_trained_in_current_epoch``` and ```training_loss = self.train_loss.result() / (step + 1)``` in ```TFTrainer```.\r\nFor 41 step (1 step after running from cpkt-40 in this case), only one loss is accumulated but step is 40.\r\n\r\nI simply revise it by initially saving ```steps_trained_in_current_epoch``` to another constant. \r\nThis may be treated in different PR.",
"Humm looks to be an issue indeed. You can keep this PR open to fix this, feel free to ask questions if I can help :)"
] | 1,610 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
The purpose of ```training_loss``` in``` TFTrainer``` is logging.
However, ```training_loss``` shows a very large number while it decreases.
And it is doubled with twice the ```gradient_accumulation_steps```
1. ```training_loss``` is accumulated during epochs. Now, it is only calculated in a step.
2. Like ```Trainer```, ```training_loss``` in ```TFTrainer``` considers ```n_replicas``` and ```gradient_accumulation_steps```.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
tensorflow: @jplu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9613/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9613/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9613",
"html_url": "https://github.com/huggingface/transformers/pull/9613",
"diff_url": "https://github.com/huggingface/transformers/pull/9613.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9613.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9612 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9612/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9612/comments | https://api.github.com/repos/huggingface/transformers/issues/9612/events | https://github.com/huggingface/transformers/issues/9612 | 786,644,000 | MDU6SXNzdWU3ODY2NDQwMDA= | 9,612 | Why do not use 'torch.nn.MultiheadAttention' to substitude 'Class BertSelfAttention+BertSelfOutput' for pytorch | {
"login": "daydayfun",
"id": 39835967,
"node_id": "MDQ6VXNlcjM5ODM1OTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/39835967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daydayfun",
"html_url": "https://github.com/daydayfun",
"followers_url": "https://api.github.com/users/daydayfun/followers",
"following_url": "https://api.github.com/users/daydayfun/following{/other_user}",
"gists_url": "https://api.github.com/users/daydayfun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daydayfun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daydayfun/subscriptions",
"organizations_url": "https://api.github.com/users/daydayfun/orgs",
"repos_url": "https://api.github.com/users/daydayfun/repos",
"events_url": "https://api.github.com/users/daydayfun/events{/privacy}",
"received_events_url": "https://api.github.com/users/daydayfun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1897896961,
"node_id": "MDU6TGFiZWwxODk3ODk2OTYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Migration",
"name": "Migration",
"color": "e99695",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"'BertSelfAttention+BertSelfOutput' is tensorflow style\r\n'torch.nn.MultiheadAttention' is real pytorch style",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"The layers from the Pytorch will be significantly faster than using the two classes like in TensorFlow. Can't we make an exception for pytorch to use the optimized layers?"
] | 1,610 | 1,680 | 1,614 | NONE | null | # 📚 Migration
## Information
pytorch has 'torch.nn.MultiheadAttention'
https://pytorch.org/docs/1.3.0/nn.html?highlight=multihead#torch.nn.MultiheadAttention
## Details
1. For Better performace and generality, I suggest to use 'torch.nn.MultiheadAttention' instead of 'Class BertSelfAttention+BertSelfOutput' in BERT model
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9612/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9612/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9611 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9611/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9611/comments | https://api.github.com/repos/huggingface/transformers/issues/9611/events | https://github.com/huggingface/transformers/pull/9611 | 786,570,642 | MDExOlB1bGxSZXF1ZXN0NTU1NDE4ODcx | 9,611 | [bugs]: class DataCollatorForWholeWordMask e["input_ids"] not have the size,change to len() | {
"login": "johnson7788",
"id": 6083466,
"node_id": "MDQ6VXNlcjYwODM0NjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6083466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnson7788",
"html_url": "https://github.com/johnson7788",
"followers_url": "https://api.github.com/users/johnson7788/followers",
"following_url": "https://api.github.com/users/johnson7788/following{/other_user}",
"gists_url": "https://api.github.com/users/johnson7788/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnson7788/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnson7788/subscriptions",
"organizations_url": "https://api.github.com/users/johnson7788/orgs",
"repos_url": "https://api.github.com/users/johnson7788/repos",
"events_url": "https://api.github.com/users/johnson7788/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnson7788/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,610 | 1,614 | 1,614 | CONTRIBUTOR | null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9611/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9611/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9611",
"html_url": "https://github.com/huggingface/transformers/pull/9611",
"diff_url": "https://github.com/huggingface/transformers/pull/9611.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9611.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9610 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9610/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9610/comments | https://api.github.com/repos/huggingface/transformers/issues/9610/events | https://github.com/huggingface/transformers/pull/9610 | 786,510,175 | MDExOlB1bGxSZXF1ZXN0NTU1MzY2NTkz | 9,610 | [DeepSpeed docs] new information | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,610 | 1,612 | 1,612 | CONTRIBUTOR | null | As I bombarded the DeepSpeed with multiple Issues, as the answers are starting to percolate back, so I will gather them in this PR. So I will let it sit for a while collecting updates, unless users will need those answers sooner.
* [x] how to run DeepSpeed with a 1 gpu which is not GPU 0 (`CUDA_VISIBLE_DEVICES` can't be used)
* [x] add a newly published paper to resources
* [x] various small additions/improvements | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9610/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9610/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9610",
"html_url": "https://github.com/huggingface/transformers/pull/9610",
"diff_url": "https://github.com/huggingface/transformers/pull/9610.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9610.patch",
"merged_at": 1612937780000
} |
https://api.github.com/repos/huggingface/transformers/issues/9609 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9609/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9609/comments | https://api.github.com/repos/huggingface/transformers/issues/9609/events | https://github.com/huggingface/transformers/pull/9609 | 786,504,348 | MDExOlB1bGxSZXF1ZXN0NTU1MzYxOTI4 | 9,609 | change masked_bias to -inf | {
"login": "xu-song",
"id": 13825126,
"node_id": "MDQ6VXNlcjEzODI1MTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/13825126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xu-song",
"html_url": "https://github.com/xu-song",
"followers_url": "https://api.github.com/users/xu-song/followers",
"following_url": "https://api.github.com/users/xu-song/following{/other_user}",
"gists_url": "https://api.github.com/users/xu-song/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xu-song/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xu-song/subscriptions",
"organizations_url": "https://api.github.com/users/xu-song/orgs",
"repos_url": "https://api.github.com/users/xu-song/repos",
"events_url": "https://api.github.com/users/xu-song/events{/privacy}",
"received_events_url": "https://api.github.com/users/xu-song/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, thank you for opening a PR!\r\n\r\nOur goal is to stay as close to the initial implementation as possible. The original implementation by OpenAI uses -1e4, so we will keep it this way.",
"I find the initial implementation is `-1e10` in https://github.com/openai/gpt-2/blob/master/src/model.py#L88\r\n\r\n```py\r\nw = w*b - tf.cast(1e10, w.dtype)*(1-b)\r\n```\r\nrelated issue #9594\r\n\r\nI am not quite sure, I guess `1e-10` is not compatible with `fp16`, that may be the reason behind huggingface implementation.\r\n"
] | 1,610 | 1,610 | 1,610 | CONTRIBUTOR | null | # What does this PR do?
change masked_bias to -inf
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9609/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9609",
"html_url": "https://github.com/huggingface/transformers/pull/9609",
"diff_url": "https://github.com/huggingface/transformers/pull/9609.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9609.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9608 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9608/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9608/comments | https://api.github.com/repos/huggingface/transformers/issues/9608/events | https://github.com/huggingface/transformers/issues/9608 | 786,497,692 | MDU6SXNzdWU3ODY0OTc2OTI= | 9,608 | Convert ckpt from TFTrainer to huggingface format. | {
"login": "kiyoungkim1",
"id": 37245002,
"node_id": "MDQ6VXNlcjM3MjQ1MDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/37245002?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kiyoungkim1",
"html_url": "https://github.com/kiyoungkim1",
"followers_url": "https://api.github.com/users/kiyoungkim1/followers",
"following_url": "https://api.github.com/users/kiyoungkim1/following{/other_user}",
"gists_url": "https://api.github.com/users/kiyoungkim1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kiyoungkim1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kiyoungkim1/subscriptions",
"organizations_url": "https://api.github.com/users/kiyoungkim1/orgs",
"repos_url": "https://api.github.com/users/kiyoungkim1/repos",
"events_url": "https://api.github.com/users/kiyoungkim1/events{/privacy}",
"received_events_url": "https://api.github.com/users/kiyoungkim1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello!\r\n\r\nYou have to use the `save_model` method of the trainer.",
"ok. thanks!"
] | 1,610 | 1,610 | 1,610 | CONTRIBUTOR | null | I trained models with ```Trainer(pytorch)``` and ```TfTrainer(tensorflow)```, repectively.
With ```Trainer```, everything is ok. Saved models are directly applicable to huggingface pipeline (eg, AutoModel('model_name')).
But with save models from ```TFTrainer``` (ckpt format), I can not do that with ```AutoModel``` and ```TFAutoModel``` .
I can restart the training, so files do not have a problem.
I guess the problem is ckpt file all contains weight and other parameters related to optimizer.
How can I transform my ckpt file to huggingface-applicable format like ```tf_model.h5``` or convert to pytorch?
@jplu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9608/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9607 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9607/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9607/comments | https://api.github.com/repos/huggingface/transformers/issues/9607/events | https://github.com/huggingface/transformers/issues/9607 | 786,470,266 | MDU6SXNzdWU3ODY0NzAyNjY= | 9,607 | [run_ner.py]You need to instantiate RobertaTokenizerFast with add_prefix_space=True to use it with pretokenized inputs | {
"login": "fangli80",
"id": 9782948,
"node_id": "MDQ6VXNlcjk3ODI5NDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9782948?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fangli80",
"html_url": "https://github.com/fangli80",
"followers_url": "https://api.github.com/users/fangli80/followers",
"following_url": "https://api.github.com/users/fangli80/following{/other_user}",
"gists_url": "https://api.github.com/users/fangli80/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fangli80/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fangli80/subscriptions",
"organizations_url": "https://api.github.com/users/fangli80/orgs",
"repos_url": "https://api.github.com/users/fangli80/repos",
"events_url": "https://api.github.com/users/fangli80/events{/privacy}",
"received_events_url": "https://api.github.com/users/fangli80/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nI would like to report the same problem. I see this problem only with RoBERTa base or large and I am also using transformers4.2.2.\r\n\r\nAny suggestions or help would be appreciated. \r\nThanks.",
"Hi, \r\nI had the same issue. I solved it by adding add_prefix_space=True to the tokenizer.\r\n\r\nBest",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi,\r\n\r\nI am have the same issue. \r\nI am loading from json - \r\n\r\n`python $SCRATCH/transformers/examples/token-classification/run_ner.py \\\r\n --model_name_or_path roberta-base \\\r\n --train_file dict_structure/trivia_training.json \\\r\n --validation_file dict_structure/trivia_val.json \\\r\n --output_dir roberta_base_on_MITMovieNER/ \\\r\n --do_train \\\r\n --do_eval \\\r\n --per_device_train_batch_size 64 \\\r\n --per_device_eval_batch_size 20 \\\r\n --num_train_epochs 40 \\\r\n --overwrite_output_dir \\\r\n --evaluation_strategy steps \\\r\n --save_steps 1000 \\\r\n --eval_steps 500 \\\r\n --logging_first_step \\`\r\n\r\nSorry, not sure if this is an issue on my end. @stefan-it ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"This remains an issue using the official example and official task; it would be great to see this addressed."
] | 1,610 | 1,623 | 1,623 | NONE | null | ## Environment info
- `transformers` version: 4.2.0
- Platform: Linux-5.4.0-53-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.12
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
examples/token-classification: @stefan-it
tokenizers: @mfuntowicz
## Information
Model I am using Roberta:
The problem arises when using:
* The official example scripts: `transformers/examples/token-classification/run_ner.py`
The tasks I am working on is:
* an official task: Named Entity Recognition on `CoNLL 2003`
## To reproduce
Steps to reproduce the behavior:
run this command:
`python ./transformers/examples/token-classification/run_ner.py --model_name_or_path roberta-base --dataset_name conll2003 --output_dir ./roberta_base_cased_conll2003 --do_train --do_eval`
I am using the `run_ner.py` of a very recent commit: `126fd281`
```
$ md5sum run_ner.py
cb6401e787266812f791a1e3052465d3 run_ner.py
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I got this error:
```
AssertionError: You need to instantiate RobertaTokenizerFast with add_prefix_space=True to use it with pretokenized inputs.
```
I tested other models, such as `bert-base-cased`, `bert-large-cased`, `xlm-roberta-base`, `xlnet-base-cased`. All of these worked. But `roberta-base` and `roberta-large` have this error.
This is the full output on screen:
```
01/14/2021 20:34:28 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 2distributed training: False, 16-bits training: False
01/14/2021 20:34:28 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir=./roberta_base_cased_conll2003, overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=EvaluationStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_steps=0, logging_dir=runs/Jan14_20-34-28_ubuntu18, logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=./roberta_base_cased_conll2003, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=False, deepspeed=None, label_smoothing_factor=0.0, adafactor=False, _n_gpu=2)
Reusing dataset conll2003 (/home/fangli/.cache/huggingface/datasets/conll2003/conll2003/1.0.0/63ba56944e35c1943434322a07ceefd79864672041b7834583709af4a5de4664)
[INFO|configuration_utils.py:445] 2021-01-14 20:34:29,366 >> loading configuration file https://huggingface.co/roberta-base/resolve/main/config.json from cache at /home/fangli/.cache/huggingface/transformers/733bade19e5f0ce98e6531021dd5180994bb2f7b8bd7e80c7968805834ba351e.35205c6cfc956461d8515139f0f8dd5d207a2f336c0c3a83b4bc8dca3518e37b
[INFO|configuration_utils.py:481] 2021-01-14 20:34:29,366 >> Model config RobertaConfig {
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"finetuning_task": "ner",
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2",
"3": "LABEL_3",
"4": "LABEL_4",
"5": "LABEL_5",
"6": "LABEL_6",
"7": "LABEL_7",
"8": "LABEL_8"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2,
"LABEL_3": 3,
"LABEL_4": 4,
"LABEL_5": 5,
"LABEL_6": 6,
"LABEL_7": 7,
"LABEL_8": 8
},
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.2.0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
[INFO|configuration_utils.py:445] 2021-01-14 20:34:29,405 >> loading configuration file https://huggingface.co/roberta-base/resolve/main/config.json from cache at /home/fangli/.cache/huggingface/transformers/733bade19e5f0ce98e6531021dd5180994bb2f7b8bd7e80c7968805834ba351e.35205c6cfc956461d8515139f0f8dd5d207a2f336c0c3a83b4bc8dca3518e37b
[INFO|configuration_utils.py:481] 2021-01-14 20:34:29,405 >> Model config RobertaConfig {
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.2.0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
[INFO|tokenization_utils_base.py:1760] 2021-01-14 20:34:29,584 >> loading file https://huggingface.co/roberta-base/resolve/main/vocab.json from cache at /home/fangli/.cache/huggingface/transformers/d3ccdbfeb9aaa747ef20432d4976c32ee3fa69663b379deb253ccfce2bb1fdc5.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab
[INFO|tokenization_utils_base.py:1760] 2021-01-14 20:34:29,585 >> loading file https://huggingface.co/roberta-base/resolve/main/merges.txt from cache at /home/fangli/.cache/huggingface/transformers/cafdecc90fcab17011e12ac813dd574b4b3fea39da6dd817813efa010262ff3f.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
[INFO|tokenization_utils_base.py:1760] 2021-01-14 20:34:29,585 >> loading file https://huggingface.co/roberta-base/resolve/main/tokenizer.json from cache at /home/fangli/.cache/huggingface/transformers/d53fc0fa09b8342651efd4073d75e19617b3e51287c2a535becda5808a8db287.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730
[INFO|modeling_utils.py:1027] 2021-01-14 20:34:29,701 >> loading weights file https://huggingface.co/roberta-base/resolve/main/pytorch_model.bin from cache at /home/fangli/.cache/huggingface/transformers/51ba668f7ff34e7cdfa9561e8361747738113878850a7d717dbc69de8683aaad.c7efaa30a0d80b2958b876969faa180e485944a849deee4ad482332de65365a7
[WARNING|modeling_utils.py:1135] 2021-01-14 20:34:32,134 >> Some weights of the model checkpoint at roberta-base were not used when initializing RobertaForTokenClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight']
- This IS expected if you are initializing RobertaForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[WARNING|modeling_utils.py:1146] 2021-01-14 20:34:32,134 >> Some weights of RobertaForTokenClassification were not initialized from the model checkpoint at roberta-base and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
File "/home/fangli/github/transformers/examples/token-classification/run_ner.py", line 428, in <module>
main()
File "/home/fangli/github/transformers/examples/token-classification/run_ner.py", line 319, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/home/fangli/anaconda3/envs/nlp/lib/python3.6/site-packages/datasets/dataset_dict.py", line 303, in map
for k, dataset in self.items()
File "/home/fangli/anaconda3/envs/nlp/lib/python3.6/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp>
for k, dataset in self.items()
File "/home/fangli/anaconda3/envs/nlp/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1240, in map
update_data = does_function_return_dict(test_inputs, test_indices)
File "/home/fangli/anaconda3/envs/nlp/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1211, in does_function_return_dict
function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "/home/fangli/github/transformers/examples/token-classification/run_ner.py", line 290, in tokenize_and_align_labels
is_split_into_words=True,
File "/home/fangli/anaconda3/envs/nlp/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 2329, in __call__
**kwargs,
File "/home/fangli/anaconda3/envs/nlp/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 2514, in batch_encode_plus
**kwargs,
File "/home/fangli/anaconda3/envs/nlp/lib/python3.6/site-packages/transformers/models/gpt2/tokenization_gpt2_fast.py", line 155, in _batch_encode_plus
f"You need to instantiate {self.__class__.__name__} with add_prefix_space=True "
AssertionError: You need to instantiate RobertaTokenizerFast with add_prefix_space=True to use it with pretokenized inputs.
```
Thanks for help!
Best,
Li | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9607/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9606 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9606/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9606/comments | https://api.github.com/repos/huggingface/transformers/issues/9606/events | https://github.com/huggingface/transformers/issues/9606 | 786,390,547 | MDU6SXNzdWU3ODYzOTA1NDc= | 9,606 | [DeepSpeed] Features to integrate / Optimizations to add / Experiments to do | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
},
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"hi,we noticed Deepspeed transformer kernel is much faster than the original PyTorch version with less memory consumption. I would like to know if you have any future plan to integrate Deepspeed transformer kernel in huggingface.\r\nThanks!",
"Personally my focus at the moment is to enable fitting big models on small hardware, because if we can do such training slowly it's better than not being able to do so.\r\n\r\nNext come the speed optimizations.\r\n\r\nI added `Deepspeed transformer kernel` to the list above. Thank you for the recommendation.\r\n\r\nBut if you'd like to do some experimentation and get some good results and submit a PR that would be fantastic. It doesn't have to be perfect, just good enough that it can be seen the speed up improvement the docs are alluding to.",
"> Personally my focus at the moment is to enable fitting big models on small hardware, because if we can do such training slowly it's better than not being able to do so.\r\n> \r\n> Next come the speed optimizations.\r\n> \r\n> I added `Deepspeed transformer kernel` to the list above. Thank you for the recommendation.\r\n> \r\n> But if you'd like to do some experimentation and get some good results and submit a PR that would be fantastic. It doesn't have to be perfect, just good enough that it can be seen the speed up improvement the docs are alluding to.\r\n\r\nHi, I did a simple test with the bert-large model,The following are the test results\r\n\r\n\r\n",
"Thank you for sharing the benchmarks, @gongjingcs\r\n\r\nThat's a nice speed up. \r\n\r\nI assume you also tested deepspeed w/o \"Deepspeed transformer kernel\" as a baseline, to know that it's that feature that gave the speed up and not DeepSpeed's other features.\r\n\r\nI encourage you to try to make a PR to integrate this aspect of Deepspeed if you are inspired to do so.",
"Hi @stas00, \r\n\r\nThank you for sharing those awesome topics. Are the features still requested/up-to-date ? I would like to follow the point made by @gongjingcs about the Deepspeed Transformer Kernel. ",
"Hi Simon,\r\n\r\nre: up-to-date I'm sure Deepspeed came up with new advancements since this was last updated, if that's what you asking about. And the list in the OP is still outstanding.\r\n\r\nSo wrt Deepspeed Transformer Kernel. How would you envision us integrating it - i.e. which components of HF transformers do you want? HF models have a lot of features inside the transformer layers, so swapping in a different Transformer block won't work easily. pytorch too has a Transformer block in its arsenal.\r\n\r\nIn other words I'm seeing to understand how you see those replacements to be used?\r\n\r\nAdditionally are you after inference or training? For inference we will soon have fast fused kernels via:\r\nhttps://github.com/huggingface/transformers/pull/14426 and @hyunwoongko has just announced https://github.com/tunib-ai/oslo https://github.com/huggingface/transformers/issues/13690#issuecomment-998492192 which does kernel fusion, though we haven't done any benchmarking yet, but check it out.\r\n\r\nThank you!",
"Thank you for your answer @stas00 \r\n\r\n> re: up-to-date I'm sure Deepspeed came up with new advancements since this was last updated, if that's what you asking about. And the list in the OP is still outstanding.\r\n\r\nI was looking at the features you provided in the list and wondered if they were still requested or if anyone was already working on it.\r\n\r\n> So wrt Deepspeed Transformer Kernel. How would you envision us integrating it - i.e. which components of HF transformers do you want? HF models have a lot of features inside the transformer layers, so swapping in a different Transformer block won't work easily. pytorch too has a Transformer block in its arsenal.\r\n> \r\n> In other words I'm seeing to understand how you see those replacements to be used?\r\n\r\nI just finished to benchmark the Transformer Kernel with the models provide in the DeepSpeedExamples repo. So I don't have a clear plan on how to do this. I was wondering if we could first do an in-place operation to swap out the Transformer layer in the Trainer s.t we can keep the HF components code unchanged while taking advantage of the throughput speed-up and the batch size improvement provided. But I don't know if it will impact other features.\r\n\r\n> Additionally are you after inference or training? For inference we will soon have fast fused kernels via:\r\n> #14426 and @hyunwoongko has just announced https://github.com/tunib-ai/oslo #13690 (comment) which does kernel fusion, though we haven't done any benchmarking yet, but check it out.\r\n\r\nI have been focusing on training: pre-training and fine-tuning. I haven't look at the deepspeed pre-training yet. OSLO seems really nice, do you think it's still worth looking at the deepspeed Transformer Kernel ?\r\n\r\nThank you ",
"The problem is that the weight names will be different and any custom features that HF Transformers model expects will not be provided by an external implementation. You can try to import the \"normal\" model and then monkeypatching the transformers layer to the deepspeed version and see if you get anywhere with it. \r\n\r\nAnd which architecture are you trying to speed up?\r\n\r\nI'm yet to try OSLO myself, so can't give any first hand experience, but since it suggests that it can fuse the model, perhaps it can do much better already than the plain pytorch version. I'd make a request at https://github.com/tunib-ai/oslo to support the arch you want and compare the performance. That would probably be the low hanging fruit.\r\n\r\nThen you can also try to compile the model into ONNX as described here https://huggingface.co/docs/transformers/serialization and use one of the optimized runtimes. But I don't yet have an experience with that tech yet, hoping to fill the gap in the new year.\r\n",
"OSLO only fuses certain parts, just like Megatron-LM. (scale+mask+softmax, bias+gelu, bias+dropout) Therefore, it is slower than the fully fusable kernels like DeepSpeed. I also reviewed DeepSpeed's transformer kernel (not the inference kernel), but I gave up because it is a structure that is difficult to apply to various architectures and cannot do tensor model parallelization.",
"On the other hand, DeepSpeed inference is a much more scalable structure. It can also perform tensor model parallelization. However, no backward kernel is provided. It would be nice if @RezaYazdaniAminabadi could provide a backward kernels. (If the backward kernels are available, I will also add them to OSLO)",
"Note that there are also lightseq kernels by bytedance which improve DeepSpeed transformer kernels.\r\nhttps://github.com/bytedance/lightseq The speed of the kernels is similar, but various kernels have been added (embedding, cross-entropy, etc...) and It provides a little more flexible Pybind API.",
"Hi, @stas00 , could you please confirm that [DeepSpeed Activation Checkpointing] is working properly?\r\nI was seeing some issues with activation partitioning feature (I need it to reduce activation memory usage)\r\nAlso, where are the code changes located for this feature?\r\nThanks!",
"we currently don't use Deepspeed's Activation Checkpointing as it'd be very difficult to integrate into transformers (it'd require massively changing all models). The normal pytorch activation available in most models works just fine. To activate it use this API:\r\nhttps://huggingface.co/docs/transformers/main_classes/model#transformers.PreTrainedModel.gradient_checkpointing_enable\r\n\r\nDeepspeed's Activation Checkpointing however has additional features that pytorch implementation lacks.\r\n\r\n"
] | 1,610 | 1,686 | null | CONTRIBUTOR | null | # 🚀 Feature request
While we have the support for main DeepSpeed features integrated, there are other powerful features that haven't been explored yet and which can provide even more various performance boosts. Some will probably require no changes on our side, while others require changes in the model and/or trainer.
This issue is to track what's possible and the priorities if any.
## Features to integrate
* [ ] [1-bit Adam](https://www.deepspeed.ai/tutorials/onebit-adam/) - Up to 5x less communication volume and up to 2x faster training
* [ ] [Progressive Layer Dropping](https://www.deepspeed.ai/tutorials/progressive_layer_dropping/) - Accelerating Training of Transformer-Based Language Models
* [ ] [DeepSpeed Sparse Attention](https://www.deepspeed.ai/tutorials/sparse-attention/) (Seems to be limited only to NVIDIA V100 )
* [ ] [DeepSpeed Transformer Kernel](https://www.deepspeed.ai/tutorials/transformer_kernel/) [api](https://deepspeed.readthedocs.io/en/latest/kernel.html)
Irrelevant to `transformers`:
* [ ] [DeepSpeed Activation Checkpointing](https://www.deepspeed.ai/docs/config-json/#activation-checkpointing) and extra discussion [here](https://github.com/microsoft/DeepSpeed/issues/665#issuecomment-760512582) - reduce the activation memory during model parallel training by partitioning activation checkpoints across model parallel GPUs, or offloading them to CPU. Since we don't use DS's PP there is no use for it.
## Experiments
Things to experiment with as well:
* [ ] try to profile model performance with DeepSpeed's `FlopsProfiler`
## Optimizations
* [ ] the new zero3 has a special requirement for inference with `--predict_with_generate` that all gpus run all `forward` calls even if they finished completing the predicted sequence early in `generate` - otherwise other gpus will hang waiting for the one that finished early. So currently the workaround is to simply always run till `max_length` in the `while` loop is reached. Which might be inefficient if we have a lot of short sequences, so need to use a synchronization trick to simultaneously quit the `while` loop when all gpus know it's safe to do so. @samyam posted a proof-of-concept for how to do that:
> We could maybe simplify by doing a single all_reduce, where gpus that are done will use a tensor with 0.0 and those that are not done will use 1.0. If the result of all reduce is 0.0 then everyone can stop, otherwise gpus that are done will do fake forward.
```
while sync.item() > 0.0:
p = model.forward(fake_input if am_i_done() else real_input)
sync =torch.tensor(0.0 if am_i_done() else 1.0)
torch.distributed.allreduce(sync)
```
At the moment this needs to be done in 5 places in the various search functions that `generate` may call.
For the full context please see: [this thread](https://github.com/microsoft/DeepSpeed/issues/860#issuecomment-799936583).
-------------------
If anybody would like to work on any of these items please open a dedicated issue so it'd be easier to track and please tag @stas00 to it.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9606/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9606/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9605 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9605/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9605/comments | https://api.github.com/repos/huggingface/transformers/issues/9605/events | https://github.com/huggingface/transformers/pull/9605 | 786,386,616 | MDExOlB1bGxSZXF1ZXN0NTU1MjY0ODk3 | 9,605 | New run_seq2seq script | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could we discuss the naming of this script and others?\r\n\r\nThe description goes:\r\n\r\n> Fine-tuning the library models for sequence to sequence.\r\n\r\n`run_seq2seq.py` is much less descriptive or intuitive than `finetune_trainer.py `- why not go back to `finetune.py` to replace the script that was PL-based and moved to experiments? \r\n\r\n1. If we are cleaning up the naming, just as well we could drop any `run_` prefices that we now have in many `examples/*/run_*.py` - they are all scripts, they all get to **run**. The names are great when they are focused on their purpose and not how they are executed. \r\n\r\n2. This script is already inside `seq2seq` - So in `examples/seq2seq/run_seq2seq.py` - how does it help to repeat it twice? I can see where you'd want to uniquely identify each script if they are taken out of context of their `examples/*/` subdir - perhaps this is the intention? perhaps if you open them all in the editor and end up with 10 `finetune.py`? If that's the case, then I can see your point of repeating the \"domain\" in the name of the script.\r\n\r\nIf the 2nd item is trying to solve the uniqueness issue, then repetition works just fine, but I strongly recommend replacing `run_` with `finetune_` to at the very least have some mnemonics about what it does.\r\n\r\n",
"Also, since you will need to update README.md to show users how to run the new script - could we have some of it in this PR? Even just the basic command lines - that would help testing this PR and not needing to figure out the new args? \r\n\r\nIf possible that is?\r\n\r\nThank you!",
"The scripts are all named `run_xxx` precisely for reason 2, the same way we didn't rename `modeling_xxx` files to just `modeling.py` when restructuring the repo. I have no strong objection to changing `run` to `finetune` but it will break lots of links in the documentation and may confuse users, so not sure if it's worth it. I'll let @LysandreJik and @patrickvonplaten chime in on that subject.\r\n\r\nI'll add command examples in the README (this PR is not quite ready to be merged yet, there is also the small test to add), first I wanted to grab comments on the actual script before finishing :-) I don't expect it to work fully (though if it's magically the case I'll be happy :-) ) which is why this PR does not delete the old script, so we can make some tests and make sure there is no regression, then progressively fix this new script as needed.",
"> The scripts are all named run_xxx precisely for reason 2 [...]\r\n\r\nI understand. Thank you for clarifying that part. Easy editing is a strong pro for sure.\r\n\r\nThinking more about it perhaps `finetune` isn't the right name either because it does finetuning plus prediction, so perhaps `run` actually is somewhat of a better choice, as it's less committing to anything ;)\r\n\r\nI think what I'm experiencing here is the pain of pattern breaking. First I was using \"finetune.py\", then I switched to \"finetune_trainer.py\" and now \"run_seq2seq.py\" - say, what? :)\r\n\r\n> I'll add command examples in the README [...]\r\n\r\nI'd like to contribute with the review, but I need context to do such things and there is neither diff nor a way to run it, I'm just not sure how to approach such type of review. So perhaps I will be able to do that at a later stage when I can test the new script, or if you'd like me to look at a particular part of it I'm game too.",
"I am merging as a first step. @stas00 I know it's missing examples of use and that there still is the memory regression, I plan to address those in follow-up PRs (also anyone should feel free to suggest improvements to the new scripts).",
"It's a good plan, @sgugger! I know you won't forget these. Thank you for considering my concerns.\r\n\r\nThe only thing I am not sure about is that nobody commented on the new script's naming."
] | 1,610 | 1,611 | 1,611 | COLLABORATOR | null | # What does this PR do?
This PR adds a new version of `finetune_trainer` using datasets and completely self-contained (not using anything in the `utils` module or any other python script of the seq2seq folder). I renamed a few args from the old script, mainly:
- `n_train` -> `max_train_samples`
- `n_val` -> `max_val_samples`
- `src_lang` -> `source_lang`
- `tgt_lang` -> `target_lang`
because they were really too short and uninformative. I didn't touch the other ones for backward compatibility (but since the name of the script will change, we can change more if we feel like it). In any case, the way the dataset main arguments is a breaking change compared to the old script.
The following are the features from the old script not implemented yet and will follow either in this PR or in follow-up PRs:
- [x ] Add a small test on some dummy data (Do not merge before this one is ticked)
- [ ] Ability to freeze the encoder / embeddings
- [ ] Pass a test set for predictions | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9605/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9605/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9605",
"html_url": "https://github.com/huggingface/transformers/pull/9605",
"diff_url": "https://github.com/huggingface/transformers/pull/9605.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9605.patch",
"merged_at": 1611087739000
} |
https://api.github.com/repos/huggingface/transformers/issues/9604 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9604/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9604/comments | https://api.github.com/repos/huggingface/transformers/issues/9604/events | https://github.com/huggingface/transformers/issues/9604 | 786,254,814 | MDU6SXNzdWU3ODYyNTQ4MTQ= | 9,604 | Mistake in the "Summary of the tasks" article | {
"login": "BunnyNoBugs",
"id": 43185527,
"node_id": "MDQ6VXNlcjQzMTg1NTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/43185527?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BunnyNoBugs",
"html_url": "https://github.com/BunnyNoBugs",
"followers_url": "https://api.github.com/users/BunnyNoBugs/followers",
"following_url": "https://api.github.com/users/BunnyNoBugs/following{/other_user}",
"gists_url": "https://api.github.com/users/BunnyNoBugs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BunnyNoBugs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BunnyNoBugs/subscriptions",
"organizations_url": "https://api.github.com/users/BunnyNoBugs/orgs",
"repos_url": "https://api.github.com/users/BunnyNoBugs/repos",
"events_url": "https://api.github.com/users/BunnyNoBugs/events{/privacy}",
"received_events_url": "https://api.github.com/users/BunnyNoBugs/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Indeed! Do you want to open a PR with a doc fix?",
"I would like to, but I couldn't find where that exact doc in the repo is",
"Also, a built-in tool for pointing out mistakes in the docs would be usable (like those when you highlight an error and press Ctrl+Enter). I notice a few mistakes and typos from time to time.\r\nI am speaking not about the docs themselves, but about the guides and tutorials which are more community-oriented",
"> Indeed! Do you want to open a PR with a doc fix?\r\n\r\n@LysandreJik, could you please point out the doc I could fix?",
"Here it is: https://github.com/huggingface/transformers/blob/master/docs/source/task_summary.rst",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,610 | 1,614 | 1,614 | NONE | null | Two first points of the translation process are duplicating two first points of summarization process:
[https://huggingface.co/transformers/task_summary.html#translation](https://huggingface.co/transformers/task_summary.html#translation)
> 1. Instantiate a tokenizer and a model from the checkpoint name. Summarization is usually done using an encoder-decoder model, such as Bart or T5.
> 2. Define the article that should be summarized. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9604/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9604/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9603 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9603/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9603/comments | https://api.github.com/repos/huggingface/transformers/issues/9603/events | https://github.com/huggingface/transformers/issues/9603 | 786,233,716 | MDU6SXNzdWU3ODYyMzM3MTY= | 9,603 | TypeError: on_init_end() got an unexpected keyword argument 'model' | {
"login": "MiriamFarber",
"id": 35157503,
"node_id": "MDQ6VXNlcjM1MTU3NTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/35157503?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MiriamFarber",
"html_url": "https://github.com/MiriamFarber",
"followers_url": "https://api.github.com/users/MiriamFarber/followers",
"following_url": "https://api.github.com/users/MiriamFarber/following{/other_user}",
"gists_url": "https://api.github.com/users/MiriamFarber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MiriamFarber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MiriamFarber/subscriptions",
"organizations_url": "https://api.github.com/users/MiriamFarber/orgs",
"repos_url": "https://api.github.com/users/MiriamFarber/repos",
"events_url": "https://api.github.com/users/MiriamFarber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MiriamFarber/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You are using a pytorch lightning callback instead of a Hugging Face `TrainerCallback`, I'm unsure of why you would think this will work. If you want to use pytorch ligthning, you will have to user their `Trainer` as well.",
"thanks @sgugger . The reason I used pytorch lightning callback is because I couldn't find in transformers something that saves only the best checkpoint. Is there something like that? (which I can use instead of using pytorch-lightings- ModelCheckpoint)",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"> You are using a pytorch lightning callback instead of a Hugging Face `TrainerCallback`, I'm unsure of why you would think this will work. If you want to use pytorch ligthning, you will have to user their `Trainer` as well.\r\n\r\nSuper Helpful"
] | 1,610 | 1,667 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 4.0.0
- Python 3.6.10
- Pytorch version: 1.6.0
- pytorch-lightning version: 1.0.3
I'm using aws_neuron_pytorch_p36 virtual environment (on p3 ec2 instance). Regarding pytorch-lightning version, the above version is the highest one I can currently use (higher versions are not supported in my framework)
### Who can help
@LysandreJik
@sgugger
## Information
Model I am using RobertaForSequenceClassification
## To reproduce
The code I'm running:
```
from transformers import RobertaForSequenceClassification
from transformers import Trainer, TrainingArguments
from pytorch_lightning.callbacks.early_stopping import EarlyStopping
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning import metrics
model = RobertaForSequenceClassification.from_pretrained(
self.training_configuration.hyper_params.pretrained_model_path, num_labels=2)
model_checkpoint = ModelCheckpoint(filepath = model_path,
verbose=1,
save_top_k=1,
save_weights_only=True,
monitor=self.training_configuration.monitor,
mode=self.training_configuration.monitor_mode,
period=1)
early_stopping = EarlyStopping(monitor=self.training_configuration.monitor,
patience=self.training_configuration.patience,
mode=self.training_configuration.monitor_mode)
training_args = TrainingArguments(
output_dir=os.path.dirname(model_path), # output directory
evaluation_strategy="epoch", # Evaluation is done at the end of each epoch.
num_train_epochs=self.training_configuration.epoch, # total number of training epochs
per_device_train_batch_size=self.training_configuration.batch_size, # batch size per device during training
per_device_eval_batch_size=self.training_configuration.batch_size, # batch size for evaluation
warmup_steps=warmup_steps, # number of warmup steps for learning rate scheduler
weight_decay=self.training_configuration.hyper_params.weight_decay, # strength of weight decay
save_total_limit=1, # limit the total amount of checkpoints. Deletes the older checkpoints.
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=training_data, # training dataset
eval_dataset=validation_data, # evaluation dataset
callbacks=[early_stopping, model_checkpoint],
compute_metrics = metrics.classification.Accuracy()
)
trainer.train()
```
The error I'm getting:
```
compute_metrics = metrics.classification.Accuracy()
File "/home/ec2-user/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py", line 305, in __init__
self.control = self.callback_handler.on_init_end(self.args, self.state, self.control)
File "/home/ec2-user/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/transformers/trainer_callback.py", line 331, in on_init_end
return self.call_event("on_init_end", args, state, control)
File "/home/ec2-user/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/transformers/trainer_callback.py", line 382, in call_event
**kwargs,
TypeError: on_init_end() got an unexpected keyword argument 'model'
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9603/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9602 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9602/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9602/comments | https://api.github.com/repos/huggingface/transformers/issues/9602/events | https://github.com/huggingface/transformers/issues/9602 | 786,233,139 | MDU6SXNzdWU3ODYyMzMxMzk= | 9,602 | TypeError: on_init_end() got an unexpected keyword argument 'model' | {
"login": "MiriamFarber",
"id": 35157503,
"node_id": "MDQ6VXNlcjM1MTU3NTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/35157503?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MiriamFarber",
"html_url": "https://github.com/MiriamFarber",
"followers_url": "https://api.github.com/users/MiriamFarber/followers",
"following_url": "https://api.github.com/users/MiriamFarber/following{/other_user}",
"gists_url": "https://api.github.com/users/MiriamFarber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MiriamFarber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MiriamFarber/subscriptions",
"organizations_url": "https://api.github.com/users/MiriamFarber/orgs",
"repos_url": "https://api.github.com/users/MiriamFarber/repos",
"events_url": "https://api.github.com/users/MiriamFarber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MiriamFarber/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,610 | 1,610 | 1,610 | NONE | null | ## Environment info
- `transformers` version: 4.0.0
- Python 3.6.10
- Python version: 1.6.0
I'm using aws_neuron_pytorch_p36 virtual environment (on p3 ec2 instance)
### Who can help
@LysandreJik
@sgugger
## Information
Model I am using RobertaForSequenceClassification
## To reproduce
The code I'm running:
```
from transformers import RobertaForSequenceClassification
from transformers import Trainer, TrainingArguments
from pytorch_lightning.callbacks.early_stopping import EarlyStopping
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning import metrics
model = RobertaForSequenceClassification.from_pretrained(
self.training_configuration.hyper_params.pretrained_model_path, num_labels=2)
model_checkpoint = ModelCheckpoint(filepath = model_path,
verbose=1,
save_top_k=1,
save_weights_only=True,
monitor=self.training_configuration.monitor,
mode=self.training_configuration.monitor_mode,
period=1)
early_stopping = EarlyStopping(monitor=self.training_configuration.monitor,
patience=self.training_configuration.patience,
mode=self.training_configuration.monitor_mode)
training_args = TrainingArguments(
output_dir=os.path.dirname(model_path), # output directory
evaluation_strategy="epoch", # Evaluation is done at the end of each epoch.
num_train_epochs=self.training_configuration.epoch, # total number of training epochs
per_device_train_batch_size=self.training_configuration.batch_size, # batch size per device during training
per_device_eval_batch_size=self.training_configuration.batch_size, # batch size for evaluation
warmup_steps=warmup_steps, # number of warmup steps for learning rate scheduler
weight_decay=self.training_configuration.hyper_params.weight_decay, # strength of weight decay
save_total_limit=1, # limit the total amount of checkpoints. Deletes the older checkpoints.
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=training_data, # training dataset
eval_dataset=validation_data, # evaluation dataset
callbacks=[early_stopping, model_checkpoint],
compute_metrics = metrics.classification.Accuracy()
)
trainer.train()
```
The error I'm getting:
```
compute_metrics = metrics.classification.Accuracy()
File "/home/ec2-user/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py", line 305, in __init__
self.control = self.callback_handler.on_init_end(self.args, self.state, self.control)
File "/home/ec2-user/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/transformers/trainer_callback.py", line 331, in on_init_end
return self.call_event("on_init_end", args, state, control)
File "/home/ec2-user/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/transformers/trainer_callback.py", line 382, in call_event
**kwargs,
TypeError: on_init_end() got an unexpected keyword argument 'model'
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9602/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9601 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9601/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9601/comments | https://api.github.com/repos/huggingface/transformers/issues/9601/events | https://github.com/huggingface/transformers/pull/9601 | 786,178,872 | MDExOlB1bGxSZXF1ZXN0NTU1MDg3MDQx | 9,601 | [TF Led] Fix wrong decoder attention mask behavior | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,610 | 1,610 | 1,610 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR fixes TF LED. I wrongly added some lines to TFLed that automatically change the attention mask. However, this is incorrect behavior and not present in the PT version of the model. Sadly, I discovered this now after the release yesterday. @LysandreJik do you think we can patch this fix to circumvent breaking backward compatibility (but it's a bug IMO anyway).
This also fixes consequencetly the flaky `let_pt_tf_equivalence` test. I ran the test 40 times and it does not fail anymore.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9601/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9601/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9601",
"html_url": "https://github.com/huggingface/transformers/pull/9601",
"diff_url": "https://github.com/huggingface/transformers/pull/9601.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9601.patch",
"merged_at": 1610710828000
} |
https://api.github.com/repos/huggingface/transformers/issues/9600 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9600/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9600/comments | https://api.github.com/repos/huggingface/transformers/issues/9600/events | https://github.com/huggingface/transformers/pull/9600 | 786,167,104 | MDExOlB1bGxSZXF1ZXN0NTU1MDc3MjY4 | 9,600 | Speed up RepetitionPenaltyLogitsProcessor (pytorch) | {
"login": "LSinev",
"id": 12072891,
"node_id": "MDQ6VXNlcjEyMDcyODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/12072891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LSinev",
"html_url": "https://github.com/LSinev",
"followers_url": "https://api.github.com/users/LSinev/followers",
"following_url": "https://api.github.com/users/LSinev/following{/other_user}",
"gists_url": "https://api.github.com/users/LSinev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LSinev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LSinev/subscriptions",
"organizations_url": "https://api.github.com/users/LSinev/orgs",
"repos_url": "https://api.github.com/users/LSinev/repos",
"events_url": "https://api.github.com/users/LSinev/events{/privacy}",
"received_events_url": "https://api.github.com/users/LSinev/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,610 | 1,619 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
Speeds up RepetitionPenaltyLogitsProcessor using torch gather-scatter functions. Tested on pytorch 1.4.0.
Here's a minimal example to reproduce the slow behavior (and test speed of improvements):
```
import torch
from transformers import RepetitionPenaltyLogitsProcessor, LogitsProcessor
import timeit
import sys
class RepetitionPenaltyLogitsProcessorNew(LogitsProcessor):
r"""
:class:`transformers.LogitsProcessor` enforcing an exponential penalty on repeated sequences.
Args:
repetition_penalty (:obj:`float`):
The parameter for repetition penalty. 1.0 means no penalty. See `this paper
<https://arxiv.org/pdf/1909.05858.pdf>`__ for more details.
"""
def __init__(self, penalty: float):
if not isinstance(penalty, float) or not (penalty > 0):
raise ValueError(f"`penalty` has to be a strictly positive float, but is {penalty}")
self.penalty = penalty
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
score = torch.gather(scores, 1, input_ids) # changed here
# if score < 0 then repetition penalty has to be multiplied to reduce the previous token probability
score = torch.where(score < 0, score * self.penalty, score / self.penalty)
scores.scatter_(1, input_ids, score) # changed here
return scores
input_ids = torch.randint(0, 10000, (256, 256))
scores = torch.randn(256, 10000)
rep_proc = RepetitionPenaltyLogitsProcessor(1.3)
rep_proc_new = RepetitionPenaltyLogitsProcessorNew(1.3)
assert torch.eq(rep_proc(input_ids, scores), rep_proc_new(input_ids, scores)).all().item(), "Should be equal"
print("Python version:", sys.version)
print("Pytorch version:", torch.__version__, "\n")
print(f"Existing rep_proc impl time for 100 iterations on CPU = {timeit.timeit(lambda: rep_proc(input_ids, scores), number=100)}")
print(f"Proposed rep_proc impl time for 100 iterations on CPU = {timeit.timeit(lambda: rep_proc_new(input_ids, scores), number=100)}\n")
if torch.cuda.is_available():
input_ids = input_ids.cuda()
scores = scores.cuda()
print(f"Existing rep_proc impl time for 100 iterations on GPU = {timeit.timeit(lambda: rep_proc(input_ids, scores), number=100)}")
print(f"Proposed rep_proc impl time for 100 iterations on GPU = {timeit.timeit(lambda: rep_proc_new(input_ids, scores), number=100)}")
```
Timings reported:
```
Python version: 3.7.9 (default, Aug 31 2020, 12:42:55)
[GCC 7.3.0]
Pytorch version: 1.4.0
Existing rep_proc impl time for 100 iterations on CPU = 0.0807734300001357
Proposed rep_proc impl time for 100 iterations on CPU = 0.044223628000054305
Existing rep_proc impl time for 100 iterations on GPU = 0.017542457000217837
Proposed rep_proc impl time for 100 iterations on GPU = 0.00720681400025569
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik, @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9600/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9600/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9600",
"html_url": "https://github.com/huggingface/transformers/pull/9600",
"diff_url": "https://github.com/huggingface/transformers/pull/9600.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9600.patch",
"merged_at": 1611134582000
} |
https://api.github.com/repos/huggingface/transformers/issues/9599 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9599/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9599/comments | https://api.github.com/repos/huggingface/transformers/issues/9599/events | https://github.com/huggingface/transformers/issues/9599 | 786,094,586 | MDU6SXNzdWU3ODYwOTQ1ODY= | 9,599 | saving the model during run_mlm | {
"login": "lkcao",
"id": 49967236,
"node_id": "MDQ6VXNlcjQ5OTY3MjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/49967236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lkcao",
"html_url": "https://github.com/lkcao",
"followers_url": "https://api.github.com/users/lkcao/followers",
"following_url": "https://api.github.com/users/lkcao/following{/other_user}",
"gists_url": "https://api.github.com/users/lkcao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lkcao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lkcao/subscriptions",
"organizations_url": "https://api.github.com/users/lkcao/orgs",
"repos_url": "https://api.github.com/users/lkcao/repos",
"events_url": "https://api.github.com/users/lkcao/events{/privacy}",
"received_events_url": "https://api.github.com/users/lkcao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please avoid spamming the repository with multiple duplicate issues.\r\nAlso, those questions should go in the [forums](https://discuss.huggingface.co/), the issues are kept for bugs and feature requests only.",
"sorry...I created two streams by mistake.\r\n"
] | 1,610 | 1,610 | 1,610 | NONE | null | Hi friends-
I am trying to train a Roberta on a large corpus with a server with time limitation.
Is there any way to save the model like every 3000 steps to keep record of the training, and resume it later?
Really need it with the project…Thanks for helping. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9599/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9599/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9598 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9598/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9598/comments | https://api.github.com/repos/huggingface/transformers/issues/9598/events | https://github.com/huggingface/transformers/issues/9598 | 786,094,294 | MDU6SXNzdWU3ODYwOTQyOTQ= | 9,598 | saving the model during run_mlm.py | {
"login": "lkcao",
"id": 49967236,
"node_id": "MDQ6VXNlcjQ5OTY3MjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/49967236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lkcao",
"html_url": "https://github.com/lkcao",
"followers_url": "https://api.github.com/users/lkcao/followers",
"following_url": "https://api.github.com/users/lkcao/following{/other_user}",
"gists_url": "https://api.github.com/users/lkcao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lkcao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lkcao/subscriptions",
"organizations_url": "https://api.github.com/users/lkcao/orgs",
"repos_url": "https://api.github.com/users/lkcao/repos",
"events_url": "https://api.github.com/users/lkcao/events{/privacy}",
"received_events_url": "https://api.github.com/users/lkcao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please avoid spamming the repository with multiple duplicate issues.\r\nAlso, those questions should go in the [forums](https://discuss.huggingface.co/), the issues are kept for bugs and feature requests only."
] | 1,610 | 1,610 | 1,610 | NONE | null | Hi friends-
I am trying to train a Roberta on a large corpus with a server with time limitation.
Is there any way to save the model like every 3000 steps to keep record of the training, and resume it later?
Really need it with the project…Thanks for helping.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9598/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9597 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9597/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9597/comments | https://api.github.com/repos/huggingface/transformers/issues/9597/events | https://github.com/huggingface/transformers/issues/9597 | 786,091,405 | MDU6SXNzdWU3ODYwOTE0MDU= | 9,597 | [Model Exporting] How to export a fine tuned model to a single pytorch or tensorflow model file? | {
"login": "farazk86",
"id": 33456896,
"node_id": "MDQ6VXNlcjMzNDU2ODk2",
"avatar_url": "https://avatars.githubusercontent.com/u/33456896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/farazk86",
"html_url": "https://github.com/farazk86",
"followers_url": "https://api.github.com/users/farazk86/followers",
"following_url": "https://api.github.com/users/farazk86/following{/other_user}",
"gists_url": "https://api.github.com/users/farazk86/gists{/gist_id}",
"starred_url": "https://api.github.com/users/farazk86/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/farazk86/subscriptions",
"organizations_url": "https://api.github.com/users/farazk86/orgs",
"repos_url": "https://api.github.com/users/farazk86/repos",
"events_url": "https://api.github.com/users/farazk86/events{/privacy}",
"received_events_url": "https://api.github.com/users/farazk86/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_Note:_ that'd be a better question for the forums at discuss.huggingface.co\r\n\r\nThe `optimizer.pt` is a snapshot of the optimizer's internal state, for inference you can delete it and only keep your `model.bin` (= the weights)",
"Thank you, that is very helpful and for directing me to the forums, I did not even know it existed :)\r\n\r\nAs regards to my other question, is it possible to do the fine tuning in tensorflow? or even export to a tensorflow model? \r\n\r\n or would this discussion be better suited to the forum?\r\n\r\nThanks"
] | 1,610 | 1,610 | 1,610 | NONE | null | Apologies if this is a very basic question, but I just cant seem to find any help or documentation for this online.
I want to use google cloud to generate text from a trained model and the maximum size for a model there is ``500MB``. Currently when finetuning a model the checkpoints folder has the ``model.bin`` file and an ``optimizer.pt`` file. These both are used when loading from pretrained.
Even when using ``distilgpt2`` the combined size of this folder is ~900MB. How do I export this model to its actual documented size of ~400MB. I assume the ``optimizer.pt`` are the weights.
So please can someone help, how do I export a checkpoint to either a tensorflow model to a pytorch model so that I can then use to generate text?
I know the latest release 4.2.0 has the function ``model.save_pretrained()``, but I am using ``transformers==2.8.0`` can a model fine tuned using ``2.8.0`` be exported using the new function?
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9597/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9596 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9596/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9596/comments | https://api.github.com/repos/huggingface/transformers/issues/9596/events | https://github.com/huggingface/transformers/pull/9596 | 786,054,862 | MDExOlB1bGxSZXF1ZXN0NTU0OTg1MzM1 | 9,596 | Update `past_key_values` in GPT-2 | {
"login": "forest1988",
"id": 2755894,
"node_id": "MDQ6VXNlcjI3NTU4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forest1988",
"html_url": "https://github.com/forest1988",
"followers_url": "https://api.github.com/users/forest1988/followers",
"following_url": "https://api.github.com/users/forest1988/following{/other_user}",
"gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forest1988/subscriptions",
"organizations_url": "https://api.github.com/users/forest1988/orgs",
"repos_url": "https://api.github.com/users/forest1988/repos",
"events_url": "https://api.github.com/users/forest1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/forest1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"CircleCI error messages says as below.\r\n\r\nIn `run_tests_torch`:\r\n``` \r\n-- Docs: https://docs.pytest.org/en/stable/warnings.html\r\n=========================== short test summary info ============================\r\nFAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_beam_sample_generate\r\nFAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_beam_search_generate\r\nFAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_beam_search_generate_dict_outputs_use_cache\r\nFAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_gpt2_gradient_checkpointing\r\nFAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_group_beam_search_generate\r\n==== 5 failed, 4202 passed, 1775 skipped, 744 warnings in 216.47s (0:03:36) ====\r\n\r\nExited with code exit status 1\r\nCircleCI received exit code 1\r\n```\r\n\r\nIn `run_tests_flax`:\r\n```\r\nFAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_beam_sample_generate\r\nFAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_beam_search_generate\r\nFAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_beam_search_generate_dict_outputs_use_cache\r\nFAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_gpt2_gradient_checkpointing\r\nFAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_group_beam_search_generate\r\n==== 5 failed, 4172 passed, 1805 skipped, 751 warnings in 282.27s (0:04:42) ====\r\n\r\nExited with code exit status 1\r\nCircleCI received exit code 1\r\n```\r\n",
"Is there a difference between `past_key_value` and `layer_past`? I understand that they both represent the contents of `past_key_values`, the past of each layer, but are they different?\r\n\r\nI first thought it might be a difference between the Causal language model and the Seq2Seq language model, but it seems that both `past_key_value` and `layer_past` are used in `modeling_bart.py`.\r\n\r\nAnd as for the contents of `layer_past`, should it be named `past_state`, as the following part of `modeling_bart.py` shows?\r\n\r\nhttps://github.com/huggingface/transformers/blob/236cc365aff2512ef773c6b1786555dab6fb182f/src/transformers/models/bart/modeling_bart.py#L1236-L1244",
"I've updated `generation_utils.py`, and it seems `mems` in transfo_xl and xlnet causes a new error.\r\n\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_gpt2_gradient_checkpointing\r\nFAILED tests/test_modeling_transfo_xl.py::TransfoXLModelTest::test_beam_sample_generate\r\nFAILED tests/test_modeling_transfo_xl.py::TransfoXLModelTest::test_beam_sample_generate_dict_output\r\nFAILED tests/test_modeling_transfo_xl.py::TransfoXLModelTest::test_beam_search_generate\r\nFAILED tests/test_modeling_transfo_xl.py::TransfoXLModelTest::test_beam_search_generate_dict_output\r\nFAILED tests/test_modeling_transfo_xl.py::TransfoXLModelTest::test_group_beam_search_generate\r\nFAILED tests/test_modeling_transfo_xl.py::TransfoXLModelTest::test_group_beam_search_generate_dict_output\r\nFAILED tests/test_modeling_xlnet.py::XLNetModelTest::test_beam_sample_generate\r\nFAILED tests/test_modeling_xlnet.py::XLNetModelTest::test_beam_sample_generate_dict_output\r\nFAILED tests/test_modeling_xlnet.py::XLNetModelTest::test_beam_search_generate\r\nFAILED tests/test_modeling_xlnet.py::XLNetModelTest::test_beam_search_generate_dict_output\r\nFAILED tests/test_modeling_xlnet.py::XLNetModelTest::test_group_beam_search_generate\r\nFAILED tests/test_modeling_xlnet.py::XLNetModelTest::test_group_beam_search_generate_dict_output\r\n=== 13 failed, 4194 passed, 1775 skipped, 743 warnings in 205.38s (0:03:25) ====\r\n\r\nExited with code exit status 1\r\nCircleCI received exit code 1\r\n```\r\n\r\nhttps://github.com/huggingface/transformers/blob/236cc365aff2512ef773c6b1786555dab6fb182f/src/transformers/models/xlnet/modeling_xlnet.py#L581-L607\r\n\r\nIt seems `mems` is something similar to `past_key_values`. \r\nIs there any difference between these two elements with different names?\r\nAlso, is it safe to change `mems` from `List[torch.Tensor]` to `Tuple[Tuple[torch.Tensor]]`?",
"Hey @forest1988, \r\n\r\nYou're PR looks very nice! Yes, it is expected that `XLNet` and `TransfoXL` fail actually since they also have been using the \"default\" `_reorder_cache` function of `modeling_utils.py`. Could you do the following changes to correct this:\r\n\r\n1) Copy that old `_reorder_cache` (the one before you did your changes) function that was in `generation_utils.py` to both `modeling_xlnet.py` and `modeling_transfo_xl.py` file so that those have the same function as before? \r\n2) Copy the current `_reorder_cache` function of `generation_utils.py` into `modeling_gpt2.py`?\r\n3) Add a default `_reorder_cache` function to `generation_utils.py` that looks as follows:\r\n\r\n```python\r\ndef _reorder_cache(self, past, beam_idx):\r\n raise NotImplementedError(...)\r\n```",
"I've just updated `torch.utils.checkpoint.checkpoint` check in `modeling_gpt2.py`, referring to `modeling_bart.py`.",
"This way it's much cleaner and correct :-) The reason I'm proposing this change is that the `_reorder_cache` function is so different for each model that there should be **no** default function. A default function could confuse people that want to add a new model in a way that they think it works out of the box, but in most cases it just doesn't. A clear error message such as:\r\n\r\n\r\n```python\r\ndef _reorder_cache(self, past, beam_idx):\r\n raise NotImplementedError(f\"Make sure that a `_reorder_cache` function is correctly implemented in {self.__class__.__module__} to enable beam search for {self.__class__}\")\r\n```\r\n```\r\n",
"I think this should solve the problems, let me know if you need more help :-) ",
"Thank you for your advice! I'll update `_reorder_cache` soon and commit it.",
"Hi @patrickvonplaten,\r\n\r\nThanks to your kind advice, I could solve the problem of `_reorder_cache` in `GPT-2`, `XLNet`, `TransfoXL` (, and `CTRL`).\r\nReferring to `modeling_bart.py`, in which `_reorder_cache` is placed in `ConditionalGeneration` Model, I added `_reoder_cache` in `LMHead` Models in each Causal Language Models.\r\n\r\nThe last one remaining bug is:\r\n```\r\nFAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_gpt2_gradient_checkpointing\r\n```\r\n\r\nI think I should modify `test_gpt2_gradient_checkpointing` so that it has `use_cache=False`, or reconsider my previous update and re-modify the usage of `checkpoint` in modeling_gpt2.\r\n> I've just updated torch.utils.checkpoint.checkpoint check in modeling_gpt2.py, referring to modeling_bart.py.\r\n>\r\n",
"All checks have passed!\r\nI appreciate all your help.\r\n\r\nHowever, in the documentation of `_reorder_cache`, there are references to both `past_key_values` and `mems` regardless of which object is used.\r\nI think we can fix that and only mention the one we use, or we can leave the reference to both to show that the aim of the function is the same.\r\nIf there is a need to modify it, please let me know.\r\n",
"Hi @patrickvonplaten,\r\n\r\n> I hope it's fine for you that I went into the PR to do some final fixes. Thanks a lot for cleaning this up :-)\r\n\r\nOf course! Thank you for adding fixes to make this PR more valuable!",
"Awesome, merging - great job @forest1988 !",
"Thank you for your advice and encouraging comments!\r\nIt’s my pleasure to have opened this PR!"
] | 1,610 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
It seems GPT-2 and BartDecoder has a different style of `past_key_values`.
Advised by @patrickvonplaten,
I opened this PR to change GPT-2's cache format from a single tensor to a tuple of 2 tensors.
Once this problem is solved, it is expected that `past_key_values` in GPT-2 will be handled in the same way as in Bart.
Sorry there remain some errors. This PR is [WIP].
I would appreciate your advice on how to update `generation_utils.py`.
Can I modify `_reorder_cache` so that past is replaced from Tuple[torch.Tensor] to Tuple[Tuple[torch.Tensor]],
or should I consider other output variations, output.mem and outputs.past_buckets_states?
Fixes #9391
From patrickvonplaten:
This PR cleans the `_reorder_cache` logic. Now `_reorcher_cache` defaults to an erroneous `NotImplementedError` in `generation_utils.py` forcing the model to implement its corresponding `_rerorder_cache` it the `modeling_...py` file itself. This is cleaner as `_reorder_cache` strongly differs from model to model. In addition, this PR makes sure that `gradient_checkpointing` can only be used if the model is in training mode and makes sure that `use_cache` is disabled when training and `gradient_checkpointing` is enabled to prevent errors.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
GPT2: @LysandreJik, @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9596/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9596",
"html_url": "https://github.com/huggingface/transformers/pull/9596",
"diff_url": "https://github.com/huggingface/transformers/pull/9596.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9596.patch",
"merged_at": 1611068415000
} |
https://api.github.com/repos/huggingface/transformers/issues/9595 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9595/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9595/comments | https://api.github.com/repos/huggingface/transformers/issues/9595/events | https://github.com/huggingface/transformers/issues/9595 | 786,034,370 | MDU6SXNzdWU3ODYwMzQzNzA= | 9,595 | Order of inputs (difference between doc and output) | {
"login": "tide90",
"id": 76215845,
"node_id": "MDQ6VXNlcjc2MjE1ODQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/76215845?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tide90",
"html_url": "https://github.com/tide90",
"followers_url": "https://api.github.com/users/tide90/followers",
"following_url": "https://api.github.com/users/tide90/following{/other_user}",
"gists_url": "https://api.github.com/users/tide90/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tide90/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tide90/subscriptions",
"organizations_url": "https://api.github.com/users/tide90/orgs",
"repos_url": "https://api.github.com/users/tide90/repos",
"events_url": "https://api.github.com/users/tide90/events{/privacy}",
"received_events_url": "https://api.github.com/users/tide90/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! Order doesn’t matter in a dictionary. \r\n\r\nIt only matters if you use the arguments as positional arguments, which is not recommended.",
"@LysandreJik \r\n\r\nSo, order does matter when using lists? What is now the right order?\r\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,610 | 1,614 | 1,614 | NONE | null | Hey,
when using a dictionary as model input does the order matters? Eg:
`model({"input_ids": input_ids, "token_type_ids": token_type_ids, "attention_mask": attention_mask})
`
and
`model({"input_ids": input_ids, "attention_mask": attention_mask, "token_type_ids": token_type_ids})
`
When using tokenizer I get a order different from the docstring and arguments order:
```
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
input_query = tokenizer(input_query,max_length=MAX_SEQ_lEN,padding="max_length",truncation=True,return_tensors="tf")
-> {"input_ids": input_ids, "token_type_ids": token_type_ids, "attention_mask": attention_mask}
```
v3.4
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9595/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9594 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9594/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9594/comments | https://api.github.com/repos/huggingface/transformers/issues/9594/events | https://github.com/huggingface/transformers/issues/9594 | 786,027,159 | MDU6SXNzdWU3ODYwMjcxNTk= | 9,594 | why set masked_bias as -10000 in GPT2 | {
"login": "xu-song",
"id": 13825126,
"node_id": "MDQ6VXNlcjEzODI1MTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/13825126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xu-song",
"html_url": "https://github.com/xu-song",
"followers_url": "https://api.github.com/users/xu-song/followers",
"following_url": "https://api.github.com/users/xu-song/following{/other_user}",
"gists_url": "https://api.github.com/users/xu-song/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xu-song/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xu-song/subscriptions",
"organizations_url": "https://api.github.com/users/xu-song/orgs",
"repos_url": "https://api.github.com/users/xu-song/repos",
"events_url": "https://api.github.com/users/xu-song/events{/privacy}",
"received_events_url": "https://api.github.com/users/xu-song/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,610 | 1,652 | 1,614 | CONTRIBUTOR | null | ## Information
`masked_bias` is set as `-10000` in GPT2, why not `-inf`?
https://github.com/huggingface/transformers/blob/e43f3b6190cfd98a38912411b8bc8ecbb6629280/src/transformers/models/gpt2/modeling_gpt2.py#L133
## openai/gpt-2
In [openai/gpt2](https://github.com/openai/gpt-2/blob/a74da5d99abaaba920de8131d64da2862a8f213b/src/model.py#L88), the bias is set as `-1e10`
```py
w = w*b - tf.cast(1e10, w.dtype)*(1-b)
```
## Other implementation, such as bert, transformer
https://github.com/huggingface/transformers/blob/82498cbc37d5c15520c7bddde5d804c804eee498/src/transformers/models/bart/modeling_bart.py#L81
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9594/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9593 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9593/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9593/comments | https://api.github.com/repos/huggingface/transformers/issues/9593/events | https://github.com/huggingface/transformers/issues/9593 | 785,964,828 | MDU6SXNzdWU3ODU5NjQ4Mjg= | 9,593 | Difference in decoded strings between a tokenizer and the corresponding fast tokenizer | {
"login": "chantera",
"id": 1482049,
"node_id": "MDQ6VXNlcjE0ODIwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1482049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chantera",
"html_url": "https://github.com/chantera",
"followers_url": "https://api.github.com/users/chantera/followers",
"following_url": "https://api.github.com/users/chantera/following{/other_user}",
"gists_url": "https://api.github.com/users/chantera/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chantera/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chantera/subscriptions",
"organizations_url": "https://api.github.com/users/chantera/orgs",
"repos_url": "https://api.github.com/users/chantera/repos",
"events_url": "https://api.github.com/users/chantera/events{/privacy}",
"received_events_url": "https://api.github.com/users/chantera/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"For the `WordPiece` decoder, which is used in `BertTokenizerFast`, It seems that `cleanup` cannot be changed after initialization.\r\nhttps://github.com/huggingface/tokenizers/blob/python-v0.10.0/tokenizers/src/tokenizer/mod.rs#L762\r\nhttps://github.com/huggingface/tokenizers/blob/python-v0.10.0/tokenizers/src/decoders/wordpiece.rs#L35\r\nhttps://github.com/huggingface/tokenizers/blob/python-v0.10.0/bindings/python/py_src/tokenizers/decoders/__init__.pyi#L113\r\nhttps://github.com/huggingface/transformers/blob/v4.2.0/src/transformers/convert_slow_tokenizer.py#L106\r\n\r\nI confirmed that a tokenizer and the fast tokenizer return the same string when they are based on SentencePiece because it treats whitespace as a symbol and can reconstruct the original sentence.\r\nSo when specifying `clean_up_tokenization_spaces=False`, spaces before punctuation depend on `ids`, but there are no differences in the decoded string between a tokenizer (e.g. `T5Tokenizer`) and the fast tokenizer (e.g. `T5TokenizerFast`).",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,610 | 1,614 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 4.2.0
- Platform: Linux-4.15.0-130-generic-x86_64-with-debian-10.5
- Python version: 3.7.8
- PyTorch version (GPU?): 1.7.1+cpu (False)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
tokenizers: @mfuntowicz
## Information
I want to feed a word-based sequence to a tokenizer and get a word-based output decoded from logits.
To leave spaces before punctuation marks, I specified `tokenizer.decode(ids, clean_up_tokenization_spaces=False)`, but a fast tokenizer removes such spaces while the corresponding non-fast tokenizer preserves them.
## To reproduce
```py
from transformers import BertTokenizer, BertTokenizerFast
seq = ['Cheerfully', ',', 'Hello', 'World', '!']
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
ids = tokenizer(seq, is_split_into_words=True).input_ids
print(ids) # => [101, 20394, 8284, 5834, 117, 8667, 1291, 106, 102]
print(tokenizer.decode(ids, clean_up_tokenization_spaces=False)) # => [CLS] Cheerfully , Hello World ! [SEP]
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
ids = tokenizer(seq, is_split_into_words=True).input_ids
print(ids) # => [101, 20394, 8284, 5834, 117, 8667, 1291, 106, 102]
print(tokenizer.decode(ids, clean_up_tokenization_spaces=False)) # => [CLS] Cheerfully, Hello World! [SEP]
```
This happens because the underlying tokenizer ([huggingface/tokenizers](https://github.com/huggingface/tokenizers/)) removes them at the [transformers/tokenization_utils_fast.py#L495](https://github.com/huggingface/transformers/blob/v4.2.0/src/transformers/tokenization_utils_fast.py#L495), whether `clean_up_tokenization_spaces` is `True` or `False`.
To avoid this issue, I tried to use `tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(ids))`, but this also did not work.
## Expected behavior
A tokenizer and its corresponding fast tokenizer must return the same decoded string.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9593/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9592 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9592/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9592/comments | https://api.github.com/repos/huggingface/transformers/issues/9592/events | https://github.com/huggingface/transformers/issues/9592 | 785,928,156 | MDU6SXNzdWU3ODU5MjgxNTY= | 9,592 | disable message "Some layers from the model checkpoint ..." | {
"login": "tide90",
"id": 76215845,
"node_id": "MDQ6VXNlcjc2MjE1ODQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/76215845?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tide90",
"html_url": "https://github.com/tide90",
"followers_url": "https://api.github.com/users/tide90/followers",
"following_url": "https://api.github.com/users/tide90/following{/other_user}",
"gists_url": "https://api.github.com/users/tide90/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tide90/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tide90/subscriptions",
"organizations_url": "https://api.github.com/users/tide90/orgs",
"repos_url": "https://api.github.com/users/tide90/repos",
"events_url": "https://api.github.com/users/tide90/events{/privacy}",
"received_events_url": "https://api.github.com/users/tide90/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can change the logging level:\r\n\r\n```py\r\nfrom transformers import logging as hf_logging\r\n\r\nhf_logging.set_verbosity_error()\r\n```"
] | 1,610 | 1,610 | 1,610 | NONE | null | I wonder how can I disable this message? v3.4
```
Some layers from the model checkpoint at bert-base-cased were not used when initializing TFBertForSequenceClassification: ['nsp___cls', 'mlm___cls']
- This IS expected if you are initializing TFBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing TFBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some layers of TFBertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['dropout_113', 'classifier']
```
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9592/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9591 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9591/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9591/comments | https://api.github.com/repos/huggingface/transformers/issues/9591/events | https://github.com/huggingface/transformers/issues/9591 | 785,927,752 | MDU6SXNzdWU3ODU5Mjc3NTI= | 9,591 | disable message "Some layers from the model checkpoint at bert-base-cased were not used when initializing" | {
"login": "tide90",
"id": 76215845,
"node_id": "MDQ6VXNlcjc2MjE1ODQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/76215845?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tide90",
"html_url": "https://github.com/tide90",
"followers_url": "https://api.github.com/users/tide90/followers",
"following_url": "https://api.github.com/users/tide90/following{/other_user}",
"gists_url": "https://api.github.com/users/tide90/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tide90/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tide90/subscriptions",
"organizations_url": "https://api.github.com/users/tide90/orgs",
"repos_url": "https://api.github.com/users/tide90/repos",
"events_url": "https://api.github.com/users/tide90/events{/privacy}",
"received_events_url": "https://api.github.com/users/tide90/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can change the logging level:\r\n\r\n```\r\nfrom transformers import logging as hf_logging\r\n\r\nhf_logging.set_verbosity_error()\r\n```"
] | 1,610 | 1,610 | 1,610 | NONE | null | I wonder how can I disable this message? v3.4
```
Some layers from the model checkpoint at bert-base-cased were not used when initializing TFBertForSequenceClassification: ['nsp___cls', 'mlm___cls']
- This IS expected if you are initializing TFBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing TFBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some layers of TFBertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['dropout_113', 'classifier']
```
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9591/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9590 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9590/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9590/comments | https://api.github.com/repos/huggingface/transformers/issues/9590/events | https://github.com/huggingface/transformers/issues/9590 | 785,890,518 | MDU6SXNzdWU3ODU4OTA1MTg= | 9,590 | WARNING:tensorflow:AutoGraph | {
"login": "tide90",
"id": 76215845,
"node_id": "MDQ6VXNlcjc2MjE1ODQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/76215845?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tide90",
"html_url": "https://github.com/tide90",
"followers_url": "https://api.github.com/users/tide90/followers",
"following_url": "https://api.github.com/users/tide90/following{/other_user}",
"gists_url": "https://api.github.com/users/tide90/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tide90/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tide90/subscriptions",
"organizations_url": "https://api.github.com/users/tide90/orgs",
"repos_url": "https://api.github.com/users/tide90/repos",
"events_url": "https://api.github.com/users/tide90/events{/privacy}",
"received_events_url": "https://api.github.com/users/tide90/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hello !\r\n\r\nYou can safely ignore those warnings, no worries.",
"@jplu Thanks. Might the results of training a model may different to new version (like because of the new kind of tokenizer)?\r\n\r\nBut \"why\" do I get the same output as in v3, since in the doc is stated that the ouput structure somehow chnaged and you cannot do unpacking like \r\n\r\n`a, b, c = model(input)\r\n`\r\n\r\nBut which stills works.\r\n\r\nHow can I ignore (disable) this messages? I tried a lot, but nothing worked!\r\n\r\n",
"In graph mode you cannot get tuples anymore, the dict output is forced and you cannot disable this message for now. This will be possible in a future release, as it will be displayed only when you set yourself the `output_attentions`, `output_hidden_states` or `return_dict` yourself in the method call while running your model in graph mode.",
"@jplu Thanky. What do you mean by graph mode? As stated above, I still get the tuples as output?",
"You are not getting tuples, by doing:\r\n```\r\na, b, c = model(input)\r\n```\r\nYou are getting the keys of the dict.\r\n\r\nBy graph mode, I mean TensorFlow graph mode, and not eager mode.",
"@jplu Ah ok, that is why I still get the tuple, because by default in tf2 eager mode is activated, right?",
"Yes eager mode is activated by default, and no you don't get tuples, you get a dict because `return_dict` is set to `True` is all the configs by default.",
"Ôk, but then I am confused why above unpacked worked for, although return dict is true?",
"Because you can unpack a dict, and you get the keys of the dict.",
"This issue has been stale for 1 month."
] | 1,610 | 1,618 | 1,618 | NONE | null | Since v4.2 I get those strange outputs while finetuning a TFBert Model:
Using
`bert_model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')`
```
WARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fb902ce88d0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: <cyfunction Socket.send at 0x7fb920685d90> is not a module, class, method, function, traceback, frame, or code object
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fb902ce88d0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: <cyfunction Socket.send at 0x7fb920685d90> is not a module, class, method, function, traceback, frame, or code object
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`
```
I saw that the outputs are now different with return_dict=True in the new version. But I can still use model.predict (using TFBert within a keras model) to get the scores (seems working though)? So I wonder how does this correlate with returning dict when just using the model output via `model(input)`? So using TF Bert with predict gives still old behaviour?
I still get the normal TFSequenceClassifierOutput with the TFBertForSequenceClassification Model? What are now the exact changes with v4?
Would be the training results actually be different with the version 4?
Also, how can I disable above messages?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9590/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9589 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9589/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9589/comments | https://api.github.com/repos/huggingface/transformers/issues/9589/events | https://github.com/huggingface/transformers/pull/9589 | 785,886,694 | MDExOlB1bGxSZXF1ZXN0NTU0ODQ1NzY2 | 9,589 | Fix conda build | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,610 | 1,610 | 1,610 | MEMBER | null | Conda build started failing when using `conda build`, using `conda-build` fixed this issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9589/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9589",
"html_url": "https://github.com/huggingface/transformers/pull/9589",
"diff_url": "https://github.com/huggingface/transformers/pull/9589.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9589.patch",
"merged_at": 1610621513000
} |
https://api.github.com/repos/huggingface/transformers/issues/9588 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9588/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9588/comments | https://api.github.com/repos/huggingface/transformers/issues/9588/events | https://github.com/huggingface/transformers/issues/9588 | 785,856,455 | MDU6SXNzdWU3ODU4NTY0NTU= | 9,588 | Longformer version of RoBERTa error | {
"login": "adamwawrzynski",
"id": 19324675,
"node_id": "MDQ6VXNlcjE5MzI0Njc1",
"avatar_url": "https://avatars.githubusercontent.com/u/19324675?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adamwawrzynski",
"html_url": "https://github.com/adamwawrzynski",
"followers_url": "https://api.github.com/users/adamwawrzynski/followers",
"following_url": "https://api.github.com/users/adamwawrzynski/following{/other_user}",
"gists_url": "https://api.github.com/users/adamwawrzynski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adamwawrzynski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adamwawrzynski/subscriptions",
"organizations_url": "https://api.github.com/users/adamwawrzynski/orgs",
"repos_url": "https://api.github.com/users/adamwawrzynski/repos",
"events_url": "https://api.github.com/users/adamwawrzynski/events{/privacy}",
"received_events_url": "https://api.github.com/users/adamwawrzynski/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Comparing codebase of version `3.0.2` and `4.2.0` I have noticed that `forward` function differs. I have added deleted lines right at the beginning of the function:\r\n```python\r\n def forward(\r\n self,\r\n hidden_states,\r\n attention_mask=None,\r\n is_index_masked=None,\r\n is_index_global_attn=None,\r\n is_global_attn=None,\r\n output_attentions=False,\r\n ):\r\n \"\"\"\r\n :class:`LongformerSelfAttention` expects `len(hidden_states)` to be multiple of `attention_window`. Padding to\r\n `attention_window` happens in :meth:`LongformerModel.forward` to avoid redoing the padding on each layer.\r\n\r\n The `attention_mask` is changed in :meth:`LongformerModel.forward` from 0, 1, 2 to:\r\n\r\n * -10000: no attention\r\n * 0: local attention\r\n * +10000: global attention\r\n \"\"\"\r\n attention_mask = attention_mask.squeeze(dim=2).squeeze(dim=1)\r\n\r\n # is index masked or global attention\r\n is_index_masked = attention_mask < 0\r\n is_index_global_attn = attention_mask > 0\r\n is_global_attn = any(is_index_global_attn.flatten())\r\n```\r\n\r\nand now model seems to be working, but returns:\r\n```bash\r\n{'eval_loss': nan, 'eval_runtime': 20.6319, 'eval_samples_per_second': 1.939}\r\n```\r\nBelow You can find results of consecutive steps in `forward` function. Can You see something wrong here?\r\n```bash\r\ndiagonal_mask: tensor([[[[-inf, -inf, -inf, ..., 0., 0., 0.]],\r\n\r\n [[-inf, -inf, -inf, ..., 0., 0., 0.]],\r\n\r\n [[-inf, -inf, -inf, ..., 0., 0., 0.]],\r\n\r\n ...,\r\n\r\n [[0., 0., 0., ..., -inf, -inf, -inf]],\r\n\r\n [[0., 0., 0., ..., -inf, -inf, -inf]],\r\n\r\n [[0., 0., 0., ..., -inf, -inf, -inf]]],\r\n\r\n\r\n [[[-inf, -inf, -inf, ..., 0., 0., 0.]],\r\n\r\n [[-inf, -inf, -inf, ..., 0., 0., 0.]],\r\n\r\n [[-inf, -inf, -inf, ..., 0., 0., 0.]],\r\n\r\n ...,\r\n\r\n [[0., 0., 0., ..., -inf, -inf, -inf]],\r\n\r\n [[0., 0., 0., ..., -inf, -inf, -inf]],\r\n\r\n [[0., 0., 0., ..., -inf, -inf, -inf]]]], device='cuda:0',\r\n dtype=torch.float16)\r\nattn_scores: tensor([[[[ -inf, -inf, -inf, ..., 0.5771, 0.2065, -1.0449],\r\n [ -inf, -inf, -inf, ..., -1.3174, -1.5547, -0.6240],\r\n [ -inf, -inf, -inf, ..., -1.3691, -1.3555, -0.3799],\r\n ...,\r\n [ -inf, -inf, -inf, ..., 1.7402, 1.6152, 0.8242],\r\n [ -inf, -inf, -inf, ..., 0.5122, 1.0342, 0.2091],\r\n [ -inf, -inf, -inf, ..., 1.7568, -0.1534, 0.7505]],\r\n\r\n [[ -inf, -inf, -inf, ..., -0.8066, -1.7480, -2.5527],\r\n [ -inf, -inf, -inf, ..., -3.3652, 0.1046, -0.5811],\r\n [ -inf, -inf, -inf, ..., -0.0958, -1.0957, -0.2377],\r\n ...,\r\n [ -inf, -inf, -inf, ..., -0.4148, -0.9497, -0.1229],\r\n [ -inf, -inf, -inf, ..., -1.9443, -1.3467, -1.5342],\r\n [ -inf, -inf, -inf, ..., 0.1263, -0.4407, 0.1486]],\r\n\r\n [[ -inf, -inf, -inf, ..., -0.9077, -0.1603, -0.5762],\r\n [ -inf, -inf, -inf, ..., -0.2454, 0.1932, -0.5034],\r\n [ -inf, -inf, -inf, ..., -1.4375, -1.2793, -1.0488],\r\n ...,\r\n [ -inf, -inf, -inf, ..., -0.3452, 0.1405, 1.3643],\r\n [ -inf, -inf, -inf, ..., -0.2168, -1.0000, -0.9956],\r\n [ -inf, -inf, -inf, ..., -1.7451, 0.1410, -0.6221]],\r\n\r\n ...,\r\n\r\n [[-1.3965, 0.7798, 0.4707, ..., -inf, -inf, -inf],\r\n [ 0.6260, -0.4146, 0.9180, ..., -inf, -inf, -inf],\r\n [ 0.4807, -1.0742, 1.2803, ..., -inf, -inf, -inf],\r\n ...,\r\n [ 0.0909, 0.8022, -0.4170, ..., -inf, -inf, -inf],\r\n [-2.6035, -1.2988, 0.5586, ..., -inf, -inf, -inf],\r\n [-0.6953, -0.8232, 0.0436, ..., -inf, -inf, -inf]],\r\n\r\n [[ 1.0889, -0.2776, -0.0632, ..., -inf, -inf, -inf],\r\n [-0.4128, 0.4834, -0.3848, ..., -inf, -inf, -inf],\r\n [-0.8794, 0.9150, -1.5107, ..., -inf, -inf, -inf],\r\n ...,\r\n [ 0.8867, -0.4731, 0.3389, ..., -inf, -inf, -inf],\r\n [-0.1365, 0.4905, -2.0000, ..., -inf, -inf, -inf],\r\n [-0.0205, -0.5464, -0.6851, ..., -inf, -inf, -inf]],\r\n\r\n [[ nan, nan, nan, ..., -inf, -inf, -inf],\r\n [ nan, nan, nan, ..., -inf, -inf, -inf],\r\n [ nan, nan, nan, ..., -inf, -inf, -inf],\r\n ...,\r\n [ nan, nan, nan, ..., -inf, -inf, -inf],\r\n [ nan, nan, nan, ..., -inf, -inf, -inf],\r\n [ nan, nan, nan, ..., -inf, -inf, -inf]]],\r\n\r\n\r\n [[[ -inf, -inf, -inf, ..., -4.0469, -2.6270, -5.4805],\r\n [ -inf, -inf, -inf, ..., -0.9312, -0.6743, -1.9688],\r\n [ -inf, -inf, -inf, ..., -0.0593, -0.9507, -0.6392],\r\n ...,\r\n [ -inf, -inf, -inf, ..., 0.3105, 2.3926, 1.0664],\r\n [ -inf, -inf, -inf, ..., -0.0166, 2.2754, 1.0449],\r\n [ -inf, -inf, -inf, ..., -0.4224, 1.7686, -0.2603]],\r\n\r\n [[ -inf, -inf, -inf, ..., -0.5088, -1.2666, -0.4363],\r\n [ -inf, -inf, -inf, ..., -0.3823, -1.7998, -0.4504],\r\n [ -inf, -inf, -inf, ..., -0.1525, 0.1614, -0.0267],\r\n ...,\r\n [ -inf, -inf, -inf, ..., 0.0225, -0.5737, 0.2318],\r\n [ -inf, -inf, -inf, ..., 0.7139, 0.6099, 0.3767],\r\n [ -inf, -inf, -inf, ..., 0.2008, -0.6714, 0.5869]],\r\n\r\n [[ -inf, -inf, -inf, ..., -0.9302, -1.5303, -2.7637],\r\n [ -inf, -inf, -inf, ..., -0.1124, -0.5850, 0.0818],\r\n [ -inf, -inf, -inf, ..., -1.5176, -1.7822, -0.9111],\r\n ...,\r\n [ -inf, -inf, -inf, ..., -0.3618, 0.3486, 0.4368],\r\n [ -inf, -inf, -inf, ..., -0.4158, -1.1660, -0.9106],\r\n [ -inf, -inf, -inf, ..., -0.4636, -0.7012, -0.9570]],\r\n\r\n ...,\r\n\r\n [[-1.0137, -1.2324, -0.2091, ..., -inf, -inf, -inf],\r\n [ 0.0793, 0.1862, -0.6162, ..., -inf, -inf, -inf],\r\n [ 0.2406, 0.1237, -1.0420, ..., -inf, -inf, -inf],\r\n ...,\r\n [ 0.5308, 0.3862, 0.9731, ..., -inf, -inf, -inf],\r\n [-0.5752, -0.8174, 0.4766, ..., -inf, -inf, -inf],\r\n [-0.4299, -0.7031, -0.6240, ..., -inf, -inf, -inf]],\r\n\r\n [[-2.9512, -1.0410, 0.9194, ..., -inf, -inf, -inf],\r\n [-0.0306, -0.8579, 0.1930, ..., -inf, -inf, -inf],\r\n [ 0.2927, -1.4600, -1.6787, ..., -inf, -inf, -inf],\r\n ...,\r\n [ 0.6128, -0.8921, 1.2861, ..., -inf, -inf, -inf],\r\n [-0.7778, -0.8564, 2.3457, ..., -inf, -inf, -inf],\r\n [-0.8877, -1.4834, 0.7783, ..., -inf, -inf, -inf]],\r\n\r\n [[ nan, nan, nan, ..., -inf, -inf, -inf],\r\n [ nan, nan, nan, ..., -inf, -inf, -inf],\r\n [ nan, nan, nan, ..., -inf, -inf, -inf],\r\n ...,\r\n [ nan, nan, nan, ..., -inf, -inf, -inf],\r\n [ nan, nan, nan, ..., -inf, -inf, -inf],\r\n [ nan, nan, nan, ..., -inf, -inf, -inf]]]],\r\n device='cuda:0', dtype=torch.float16)\r\n``` ",
"Hey @adamwawrzynski,\r\n\r\nsadly we cannot maintain `convert_model_to_longformer.py` as I think it's not in the core transformers library `src/transformers/...`. Feel free to ask your question on the forum: https://discuss.huggingface.co/ though - maybe someone from the community wants to help",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,610 | 1,614 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 4.2.0
- Platform: Linux-4.15.0-123-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
Longformer/Reformer: @patrickvonplaten
## Information
Model I am using script to initialize Longformer starting from [HerBERT](https://huggingface.co/allegro/herbert-klej-cased-v1)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Install dependencies: `python3 -m pip install -r requirements.txt`.
2. Install apex according to [official documentation](https://github.com/NVIDIA/apex).
3. Run command `CUDA_VISIBLE_DEVICES=0 python3 convert_model_to_longformer.py --finetune_dataset conllu`.
We are using dataset in `.jsonl` format, each line contains 1 CoNLLu entry. It is converted using custom `LineByLineTextDataset` class to LineByLine format from current version of `transformers`. I've added this class to be able to use it in older version (v3.0.2).
Using suggested by author on [allenai/longformer](https://github.com/allenai/longformer) I've used `transformers` in version `3.0.2` and it works fine. But I would like to use recent models to convert them to Long* version and I can't make conversion script work.
## Result
As a result of running command above with `transformers` in version `4.2.0` I've got:
```bash
Traceback (most recent call last):
File "convert_model_to_longformer.py", line 277, in <module>
pretrain_and_evaluate(
File "convert_model_to_longformer.py", line 165, in pretrain_and_evaluate
eval_loss = trainer.evaluate()
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/trainer.py", line 1442, in evaluate
output = self.prediction_loop(
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/trainer.py", line 1566, in prediction_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/trainer.py", line 1670, in prediction_step
outputs = model(**inputs)
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 1032, in forward
outputs = self.roberta(
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 798, in forward
encoder_outputs = self.encoder(
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 498, in forward
layer_outputs = layer_module(
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 393, in forward
self_attention_outputs = self.attention(
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 321, in forward
self_outputs = self.self(
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "convert_model_to_longformer.py", line 63, in forward
return super().forward(hidden_states, attention_mask=attention_mask, output_attentions=output_attentions) # v4.2.0
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/longformer/modeling_longformer.py", line 600, in forward
diagonal_mask = self._sliding_chunks_query_key_matmul(
File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/longformer/modeling_longformer.py", line 789, in _sliding_chunks_query_key_matmul
batch_size, seq_len, num_heads, head_dim = query.size()
ValueError: too many values to unpack (expected 4)
```
I've changed function `/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/longformer/modeling_longformer.py` up to line 789:
```bash
def forward(
self,
hidden_states,
attention_mask=None,
is_index_masked=None,
is_index_global_attn=None,
is_global_attn=None,
output_attentions=False,
):
"""
:class:`LongformerSelfAttention` expects `len(hidden_states)` to be multiple of `attention_window`. Padding to
`attention_window` happens in :meth:`LongformerModel.forward` to avoid redoing the padding on each layer.
The `attention_mask` is changed in :meth:`LongformerModel.forward` from 0, 1, 2 to:
* -10000: no attention
* 0: local attention
* +10000: global attention
"""
hidden_states = hidden_states.transpose(0, 1)
# project hidden states
query_vectors = self.query(hidden_states)
key_vectors = self.key(hidden_states)
value_vectors = self.value(hidden_states)
print(f"query_vectors: {query_vectors.shape}")
print(f"key_vectors: {key_vectors.shape}")
print(f"value_vectors: {value_vectors.shape}")
print(f"attention_mask: {attention_mask.shape}")
seq_len, batch_size, embed_dim = hidden_states.size()
assert (
embed_dim == self.embed_dim
), f"hidden_states should have embed_dim = {self.embed_dim}, but has {embed_dim}"
# normalize query
query_vectors /= math.sqrt(self.head_dim)
query_vectors = query_vectors.view(seq_len, batch_size, self.num_heads, self.head_dim).transpose(0, 1)
key_vectors = key_vectors.view(seq_len, batch_size, self.num_heads, self.head_dim).transpose(0, 1)
attn_scores = self._sliding_chunks_query_key_matmul(
query_vectors, key_vectors, self.one_sided_attn_window_size
)
# values to pad for attention probs
remove_from_windowed_attention_mask = (attention_mask != 0)[:, :, None, None]
# cast to fp32/fp16 then replace 1's with -inf
float_mask = remove_from_windowed_attention_mask.type_as(query_vectors).masked_fill(
remove_from_windowed_attention_mask, -10000.0
)
print(f"attn_scores: {attn_scores.shape}")
print(f"remove_from_windowed_attention_mask: {remove_from_windowed_attention_mask.shape}")
print(f"float_mask: {float_mask.shape}")
# diagonal mask with zeros everywhere and -inf inplace of padding
diagonal_mask = self._sliding_chunks_query_key_matmul(
float_mask.new_ones(size=float_mask.size()), float_mask, self.one_sided_attn_window_size
)
```
And as a result I've got:
```bash
attention_mask: torch.Size([2, 1, 1, 1024])
query_vectors: torch.Size([1024, 2, 768])
key_vectors: torch.Size([1024, 2, 768])
value_vectors: torch.Size([1024, 2, 768])
attn_scores: torch.Size([2, 1024, 12, 513])
remove_from_windowed_attention_mask: torch.Size([2, 1, 1, 1, 1, 1024])
float_mask: torch.Size([2, 1, 1, 1, 1, 1024])
```
And after changing version to `3.0.2` and adding print statements I've got:
```bash
attention_mask: torch.Size([2, 1024])
query_vectors: torch.Size([1024, 2, 768])
key_vectors: torch.Size([1024, 2, 768])
value_vectors: torch.Size([1024, 2, 768])
attn_scores: torch.Size([2, 1024, 12, 513])
remove_from_windowed_attention_mask: torch.Size([2, 1024, 1, 1])
float_mask: torch.Size([2, 1024, 1, 1])
```
So maybe it's problem with `_sliding_chunks_query_key_matmul` function?
## Files:
convert_model_to_longformer.py, based on [allenai/longformer/scripts/convert_model_to_long.ipynb](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb):
```python3
import logging
import os
import math
import copy
import torch
import argparse
from dataclasses import dataclass, field
from transformers import RobertaForMaskedLM, XLMTokenizer, TextDataset, DataCollatorForLanguageModeling, Trainer, XLMTokenizer, PreTrainedTokenizer
from transformers import TrainingArguments, HfArgumentParser, XLMTokenizer, RobertaModel, XLMTokenizer
from transformers import LongformerSelfAttention # v4.2.0
# from transformers.modeling_longformer import LongformerSelfAttention # v3.0.2
from conllu import load_conllu_dataset, save_conllu_dataset_in_linebyline_format
from torch.utils.data.dataset import Dataset
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)
class LineByLineTextDataset(Dataset):
"""
This will be superseded by a framework-agnostic approach
soon.
"""
def __init__(self, tokenizer: PreTrainedTokenizer, file_path: str, block_size: int):
assert os.path.isfile(file_path)
# Here, we do not cache the features, operating under the assumption
# that we will soon use fast multithreaded tokenizers from the
# `tokenizers` repo everywhere =)
logger.info("Creating features from dataset file at %s", file_path)
with open(file_path, encoding="utf-8") as f:
lines = [line for line in f.read().splitlines() if (len(line) > 0 and not line.isspace())]
batch_encoding = tokenizer(
lines,
add_special_tokens=True,
truncation=True,
padding="max_length",
max_length=block_size,
pad_to_multiple_of=512)
self.examples = batch_encoding["input_ids"]
def __len__(self):
return len(self.examples)
def __getitem__(self, i) -> torch.Tensor:
return torch.tensor(self.examples[i], dtype=torch.long)
class RobertaLongSelfAttention(LongformerSelfAttention):
def forward(
self,
hidden_states,
attention_mask=None,
head_mask=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
past_key_value=None,
output_attentions=False,
):
return super().forward(hidden_states, attention_mask=attention_mask, output_attentions=output_attentions)
class RobertaLongForMaskedLM(RobertaForMaskedLM):
def __init__(self, config):
super().__init__(config)
for i, layer in enumerate(self.roberta.encoder.layer):
# replace the `modeling_bert.BertSelfAttention` object with `LongformerSelfAttention`
layer.attention.self = RobertaLongSelfAttention(config, layer_id=i)
class RobertaLongModel(RobertaModel):
def __init__(self, config):
super().__init__(config)
for i, layer in enumerate(self.encoder.layer):
# replace the `modeling_bert.BertSelfAttention` object with `LongformerSelfAttention`
layer.attention.self = RobertaLongSelfAttention(config, layer_id=i)
def create_long_model(initialization_model, initialization_tokenizer, save_model_to, attention_window, max_pos):
model = RobertaForMaskedLM.from_pretrained(initialization_model)
tokenizer = XLMTokenizer.from_pretrained(initialization_tokenizer, model_max_length=max_pos)
config = model.config
# extend position embeddings
tokenizer.model_max_length = max_pos
tokenizer.init_kwargs['model_max_length'] = max_pos
current_max_pos, embed_size = model.roberta.embeddings.position_embeddings.weight.shape
max_pos += 2 # NOTE: RoBERTa has positions 0,1 reserved, so embedding size is max position + 2
config.max_position_embeddings = max_pos
assert max_pos > current_max_pos
# allocate a larger position embedding matrix
new_pos_embed = model.roberta.embeddings.position_embeddings.weight.new_empty(max_pos, embed_size)
# copy position embeddings over and over to initialize the new position embeddings
k = 2
step = current_max_pos - 2
while k < max_pos - 1:
new_pos_embed[k:(k + step)] = model.roberta.embeddings.position_embeddings.weight[2:]
k += step
model.roberta.embeddings.position_embeddings.weight.data = new_pos_embed
model.roberta.embeddings.position_ids.data = torch.tensor([i for i in range(max_pos)]).reshape(1, max_pos) # v4.2.0
# model.roberta.embeddings.position_ids = torch.tensor([i for i in range(max_pos)]).reshape(1, max_pos) # v3.0.2
# replace the `modeling_bert.BertSelfAttention` object with `LongformerSelfAttention`
config.attention_window = [attention_window] * config.num_hidden_layers
for i, layer in enumerate(model.roberta.encoder.layer):
longformer_self_attn = LongformerSelfAttention(config, layer_id=i)
longformer_self_attn.query = copy.deepcopy(layer.attention.self.query)
longformer_self_attn.key = copy.deepcopy(layer.attention.self.key)
longformer_self_attn.value = copy.deepcopy(layer.attention.self.value)
longformer_self_attn.query_global = copy.deepcopy(layer.attention.self.query)
longformer_self_attn.key_global = copy.deepcopy(layer.attention.self.key)
longformer_self_attn.value_global = copy.deepcopy(layer.attention.self.value)
layer.attention.self = longformer_self_attn
logger.info(f'saving model to {save_model_to}')
model.save_pretrained(save_model_to)
tokenizer.save_pretrained(save_model_to)
return model, tokenizer
def copy_proj_layers(model):
for i, layer in enumerate(model.roberta.encoder.layer):
layer.attention.self.query_global = copy.deepcopy(layer.attention.self.query)
layer.attention.self.key_global = copy.deepcopy(layer.attention.self.key)
layer.attention.self.value_global = copy.deepcopy(layer.attention.self.value)
return model
def pretrain_and_evaluate(args, model, tokenizer, eval_only, model_path, max_size):
val_dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path=args.val_datapath,
block_size=max_size,
)
if eval_only:
train_dataset = val_dataset
else:
logger.info(f'Loading and tokenizing training data is usually slow: {args.train_datapath}')
train_dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path=args.train_datapath,
block_size=max_size,
)
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer,
mlm=True,
mlm_probability=0.15,
)
trainer = Trainer(
model=model,
args=args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=val_dataset,
# prediction_loss_only=True,
)
eval_loss = trainer.evaluate()
eval_loss = eval_loss['eval_loss']
logger.info(f'Initial eval bpc: {eval_loss/math.log(2)}')
exit(0)
if not eval_only:
trainer = Trainer(
model=model,
args=args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=val_dataset,
prediction_loss_only=False,
)
trainer.train(model_path=model_path)
trainer.save_model()
eval_loss = trainer.evaluate()
eval_loss = eval_loss['eval_loss']
logger.info(f'Eval bpc after pretraining: {eval_loss/math.log(2)}')
@dataclass
class ModelArgs:
attention_window: int = field(default=512, metadata={"help": "Size of attention window"})
max_pos: int = field(default=1024, metadata={"help": "Maximum position"})
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--finetune_dataset", required=True, choices=["conllu"], help="Name of dataset to finetune")
return parser.parse_args()
if __name__ == "__main__":
parser = HfArgumentParser((TrainingArguments, ModelArgs,))
args = parse_args()
training_args, model_args = parser.parse_args_into_dataclasses(look_for_args_file=False, args=[
'--output_dir', 'tmp_4.2.0',
'--warmup_steps', '500',
'--learning_rate', '0.00003',
'--weight_decay', '0.01',
'--adam_epsilon', '1e-6',
'--max_steps', '3000',
'--logging_steps', '500',
'--save_steps', '500',
'--max_grad_norm', '5.0',
'--per_device_eval_batch_size', '2',
'--per_device_train_batch_size', '2',
'--gradient_accumulation_steps', '4',
# '--evaluate_during_training',
'--do_train',
'--do_eval',
'--fp16',
'--fp16_opt_level', 'O2',
])
if args.finetune_dataset == "conllu":
saved_dataset = '/server/server_1/user/longformer_summary/conllu/'
if not os.path.exists(saved_dataset):
os.makedirs(saved_dataset)
dataset = load_conllu_dataset('/server/server_1/user/conllu_dataset/')
save_conllu_dataset_in_linebyline_format(dataset, saved_dataset)
training_args.val_datapath = os.path.join(saved_dataset, 'validation.txt')
training_args.train_datapath = os.path.join(saved_dataset, 'train.txt')
initialization_model = 'allegro/herbert-klej-cased-v1'
initialization_tokenizer = 'allegro/herbert-klej-cased-tokenizer-v1'
roberta_base = RobertaForMaskedLM.from_pretrained(initialization_model)
roberta_base_tokenizer = XLMTokenizer.from_pretrained(initialization_tokenizer, model_max_length=512)
model_path = f'{training_args.output_dir}/{initialization_model}-{model_args.max_pos}'
if not os.path.exists(model_path):
os.makedirs(model_path)
logger.info(f'Converting roberta-base into {initialization_model}-{model_args.max_pos}')
model, tokenizer = create_long_model(
initialization_model=initialization_model,
initialization_tokenizer=initialization_tokenizer,
save_model_to=model_path,
attention_window=model_args.attention_window,
max_pos=model_args.max_pos,
)
logger.info(f'Loading the model from {model_path}')
tokenizer = XLMTokenizer.from_pretrained(model_path)
model = RobertaLongForMaskedLM.from_pretrained(model_path)
logger.info(f'Pretraining {initialization_model}-{model_args.max_pos} ... ')
pretrain_and_evaluate(
training_args,
model,
tokenizer,
eval_only=False,
model_path=training_args.output_dir,
max_size=model_args.max_pos,
)
logger.info(f'Copying local projection layers into global projection layers... ')
model = copy_proj_layers(model)
logger.info(f'Saving model to {model_path}')
model.save_pretrained(model_path)
logger.info(f'Loading the model from {model_path}')
tokenizer = XLMTokenizer.from_pretrained(model_path)
model = RobertaLongModel.from_pretrained(model_path)
```
conllu.py
```python3
import re
import glob
import torch
from torch.utils.data import Dataset
import time
import os
import json
from xml.etree.ElementTree import ParseError
import xml.etree.ElementTree as ET
from typing import List, Dict
from sklearn.model_selection import train_test_split
def load_conllu_jsonl(
path: str,
) -> List[Dict[str, str]]:
dataset: List[Dict[str, str]] = list()
with open(path, 'r') as f:
for jsonl in f.readlines():
json_file = json.loads(jsonl)
conllu = json_file['conllu'].split('\n')
doc_text: str = ""
utterance: Dict[str, str] = dict()
for line in conllu:
try:
if line[0].isdigit():
if utterance:
masked_text = utterance["text"]
doc_text = f"{doc_text} {masked_text}.".strip()
utterance = dict()
elif line[0] == '#':
text = line[1:].strip()
key = text.split('=')[0].strip()
value = text.split('=')[1].strip()
utterance[key] = value
except IndexError:
pass
dataset.append({"text": doc_text})
return dataset
def load_conllu_dataset(
path: str,
train_test_val_ratio: float = 0.1,
) -> Dict[str, List[Dict[str, str]]]:
dataset: Dict[str, List[Dict[str, str]]] = dict()
data_dict: Dict[str, List[str]] = dict()
filepath_list = glob.glob(os.path.join(path, '*.jsonl'))
train = filepath_list[:int(len(filepath_list)*0.8)]
test = filepath_list[int(len(filepath_list)*0.8):int(len(filepath_list)*0.9)]
val = filepath_list[int(len(filepath_list)*0.9):]
data_dict["test"] = test
data_dict["train"] = train
data_dict["validation"] = val
for key, value in data_dict.items():
dataset_list: List[Dict[str, str]] = list()
for filepath in value:
data = load_conllu_jsonl(path=filepath)
if data:
dataset_list.extend(data)
dataset[key] = dataset_list
return dataset
def save_conllu_dataset_in_linebyline_format(
dataset: Dict[str, List[Dict[str, str]]],
save_dir: str,
) -> None:
for key, value in dataset.items():
with open(os.path.join(save_dir, f'{key}.txt'), 'w') as f:
for line in value:
# print(line["full"])
f.write(f'{line["text"]}\n')
```
requirements.txt:
```bash
apex @ file:///server/server_1/user/apex
certifi==2020.12.5
chardet==4.0.0
click==7.1.2
datasets==1.2.0
dill==0.3.3
filelock==3.0.12
idna==2.10
joblib==1.0.0
multiprocess==0.70.11.1
numpy==1.19.4
packaging==20.8
pandas==1.2.0
pyarrow==2.0.0
pyparsing==2.4.7
python-dateutil==2.8.1
pytz==2020.5
regex==2020.11.13
requests==2.25.1
sacremoses==0.0.43
sentencepiece==0.1.94
six==1.15.0
tokenizers==0.8.1rc1
torch==1.7.1
tqdm==4.49.0
transformers==3.0.2
typing-extensions==3.7.4.3
urllib3==1.26.2
xxhash==2.0.0
```
## Expected behavior
Model should be converted, saved and loaded. After that it should be properly fine-tuned and saved on disk.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9588/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9588/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9587 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9587/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9587/comments | https://api.github.com/repos/huggingface/transformers/issues/9587/events | https://github.com/huggingface/transformers/issues/9587 | 785,807,931 | MDU6SXNzdWU3ODU4MDc5MzE= | 9,587 | How to fine-tune T5/Bart for other languages on summarization? | {
"login": "HeroadZ",
"id": 17962682,
"node_id": "MDQ6VXNlcjE3OTYyNjgy",
"avatar_url": "https://avatars.githubusercontent.com/u/17962682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HeroadZ",
"html_url": "https://github.com/HeroadZ",
"followers_url": "https://api.github.com/users/HeroadZ/followers",
"following_url": "https://api.github.com/users/HeroadZ/following{/other_user}",
"gists_url": "https://api.github.com/users/HeroadZ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HeroadZ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HeroadZ/subscriptions",
"organizations_url": "https://api.github.com/users/HeroadZ/orgs",
"repos_url": "https://api.github.com/users/HeroadZ/repos",
"events_url": "https://api.github.com/users/HeroadZ/events{/privacy}",
"received_events_url": "https://api.github.com/users/HeroadZ/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! The `BertJapaneseTokenizer` you mention was created specifically for Japanese, so it should encode the Japanese language well.\r\n\r\nYou can find the list of models that have Japanese checkpoints [here](https://huggingface.co/models?filter=ja).",
"Tokenizers can be decoupled from their models, so you can indeed use a BERT tokenizer with a BART model; however, this requires the tokenizer and model to be trained together.",
"Thanks for the quick reply! \r\nI don't know the exact procedure to train tokenizer and model **together**. Could you explain it in detail?"
] | 1,610 | 1,613 | 1,613 | CONTRIBUTOR | null | Assume that I have a Japanese dataset for fine-tuning. How could I fine-tune it?
I think that the original tokenizer like `BartTokenizer` or `T5Tokenizer` can't be used for Japanese, right?
So is it possible to use a Japanese tokenizer like `BertJapaneseTokenizer` to fine-tune a Bart model? Please give me some advice. Thank you very much. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9587/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9586 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9586/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9586/comments | https://api.github.com/repos/huggingface/transformers/issues/9586/events | https://github.com/huggingface/transformers/pull/9586 | 785,806,346 | MDExOlB1bGxSZXF1ZXN0NTU0Nzc5MDY3 | 9,586 | [bugs] 1. fix chinese_ref column will ignore even we add it in to Datasets. | {
"login": "johnson7788",
"id": 6083466,
"node_id": "MDQ6VXNlcjYwODM0NjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6083466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnson7788",
"html_url": "https://github.com/johnson7788",
"followers_url": "https://api.github.com/users/johnson7788/followers",
"following_url": "https://api.github.com/users/johnson7788/following{/other_user}",
"gists_url": "https://api.github.com/users/johnson7788/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnson7788/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnson7788/subscriptions",
"organizations_url": "https://api.github.com/users/johnson7788/orgs",
"repos_url": "https://api.github.com/users/johnson7788/repos",
"events_url": "https://api.github.com/users/johnson7788/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnson7788/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can avoid removing that column by setting `remove_unused_columns=False` in your `TrainingArguments`.",
"I will try it,thank you so much "
] | 1,610 | 1,610 | 1,610 | CONTRIBUTOR | null | [bugs] 1. fix chinese_ref column will ignore even we add it in to Datasets.
[bugs] 2. DataCollatorForWholeWordMask e["chinese_ref"] is a list, fix the get length method.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
I follow the examples/language-modeling/run_mlm_wwm.py, chinese whole word mask, found chinese_ref column not used even i add it into datasets, and because it has been removed by trainer.py function _remove_unused_columns()
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9586/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9586",
"html_url": "https://github.com/huggingface/transformers/pull/9586",
"diff_url": "https://github.com/huggingface/transformers/pull/9586.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9586.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9585 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9585/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9585/comments | https://api.github.com/repos/huggingface/transformers/issues/9585/events | https://github.com/huggingface/transformers/pull/9585 | 785,800,939 | MDExOlB1bGxSZXF1ZXN0NTU0Nzc0NTYw | 9,585 | Gradient accumulation for TFTrainer | {
"login": "kiyoungkim1",
"id": 37245002,
"node_id": "MDQ6VXNlcjM3MjQ1MDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/37245002?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kiyoungkim1",
"html_url": "https://github.com/kiyoungkim1",
"followers_url": "https://api.github.com/users/kiyoungkim1/followers",
"following_url": "https://api.github.com/users/kiyoungkim1/following{/other_user}",
"gists_url": "https://api.github.com/users/kiyoungkim1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kiyoungkim1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kiyoungkim1/subscriptions",
"organizations_url": "https://api.github.com/users/kiyoungkim1/orgs",
"repos_url": "https://api.github.com/users/kiyoungkim1/repos",
"events_url": "https://api.github.com/users/kiyoungkim1/events{/privacy}",
"received_events_url": "https://api.github.com/users/kiyoungkim1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sorry we have to revert this PR as `labels` is not a dict but a tensor and makes fails all our examples.",
"Thanks a lot for having spotted this cas, a more adapted fix will be available here #9616 very sorry for the inconvenience.",
"Never mind. Thanks again for the fix."
] | 1,610 | 1,610 | 1,610 | CONTRIBUTOR | null | # What does this PR do?
```TFTrainer``` does not work with ```gradient_accumulation_steps``` > 1 (I am doing with ```TFGPT2LMHeadModel```).
Similar treatment of #6479 is done for labels.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
tensorflow: @jplu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9585/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9585",
"html_url": "https://github.com/huggingface/transformers/pull/9585",
"diff_url": "https://github.com/huggingface/transformers/pull/9585.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9585.patch",
"merged_at": 1610637400000
} |
https://api.github.com/repos/huggingface/transformers/issues/9584 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9584/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9584/comments | https://api.github.com/repos/huggingface/transformers/issues/9584/events | https://github.com/huggingface/transformers/pull/9584 | 785,792,093 | MDExOlB1bGxSZXF1ZXN0NTU0NzY3MjQz | 9,584 | BatchEncoding.to with device with tests | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,610 | 1,610 | 1,610 | MEMBER | null | Closes https://github.com/huggingface/transformers/issues/9580
The `torch` module isn't imported directly in the `tokenization_utils.py` file. In a similar fashion to the tensor checks, this PR adds a device check to identify if a variable is a torch device.
Adds a test that fails previous to this PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9584/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9584",
"html_url": "https://github.com/huggingface/transformers/pull/9584",
"diff_url": "https://github.com/huggingface/transformers/pull/9584.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9584.patch",
"merged_at": 1610629078000
} |
https://api.github.com/repos/huggingface/transformers/issues/9583 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9583/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9583/comments | https://api.github.com/repos/huggingface/transformers/issues/9583/events | https://github.com/huggingface/transformers/issues/9583 | 785,726,491 | MDU6SXNzdWU3ODU3MjY0OTE= | 9,583 | Custom mask when performing forward pass | {
"login": "hoangphuc1998",
"id": 24960238,
"node_id": "MDQ6VXNlcjI0OTYwMjM4",
"avatar_url": "https://avatars.githubusercontent.com/u/24960238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoangphuc1998",
"html_url": "https://github.com/hoangphuc1998",
"followers_url": "https://api.github.com/users/hoangphuc1998/followers",
"following_url": "https://api.github.com/users/hoangphuc1998/following{/other_user}",
"gists_url": "https://api.github.com/users/hoangphuc1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hoangphuc1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hoangphuc1998/subscriptions",
"organizations_url": "https://api.github.com/users/hoangphuc1998/orgs",
"repos_url": "https://api.github.com/users/hoangphuc1998/repos",
"events_url": "https://api.github.com/users/hoangphuc1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/hoangphuc1998/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,610 | 1,610 | 1,610 | NONE | null | Suppose I have a sequence that consists of 2 sentences separated by \<\/SEP\> tokens like A \<\/SEP\> B. When performing forward pass with RoBERTa model, I want tokens in sentence A only attend to tokens in sentence A and vice versa for sentence B. The mask will be look like this:

In summary, is there any way to explicitly pass a custom attention mask to the model? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9583/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9582 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9582/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9582/comments | https://api.github.com/repos/huggingface/transformers/issues/9582/events | https://github.com/huggingface/transformers/pull/9582 | 785,670,529 | MDExOlB1bGxSZXF1ZXN0NTU0NjYwMTUx | 9,582 | [deepspeed doc] install issues + 1-gpu deployment | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Thank you for your awesome suggestions and tweaks - all done."
] | 1,610 | 1,610 | 1,610 | CONTRIBUTOR | null | This PR extends the DeepSpeed/FairScale integration documentation to:
* add extensive general troubleshooting for CUDA-extensions (applies to fairscale, deepspeed, apex or any other python pytorch extension with CUDA C++ code) - these are very likely to be encountered by our users - all notes are based on my first hand encounters with these issues - 2 of which I run into yesterday while trying to build fairscale and deepspeed on Sylvain's hardware which he let me use to run the recent benchmarks. so I figured others are likely to have similar issues and neither fairscale nor deepspeed have these documented anywhere.
* adds deployment for 1 gpu DeepSpeed notes
* reformats sub-headers so that it's easier to link to specific sections
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9582/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9582",
"html_url": "https://github.com/huggingface/transformers/pull/9582",
"diff_url": "https://github.com/huggingface/transformers/pull/9582.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9582.patch",
"merged_at": 1610651105000
} |
https://api.github.com/repos/huggingface/transformers/issues/9581 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9581/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9581/comments | https://api.github.com/repos/huggingface/transformers/issues/9581/events | https://github.com/huggingface/transformers/issues/9581 | 785,665,522 | MDU6SXNzdWU3ODU2NjU1MjI= | 9,581 | A question about the weight decay | {
"login": "speedcell4",
"id": 3585459,
"node_id": "MDQ6VXNlcjM1ODU0NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3585459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/speedcell4",
"html_url": "https://github.com/speedcell4",
"followers_url": "https://api.github.com/users/speedcell4/followers",
"following_url": "https://api.github.com/users/speedcell4/following{/other_user}",
"gists_url": "https://api.github.com/users/speedcell4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/speedcell4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/speedcell4/subscriptions",
"organizations_url": "https://api.github.com/users/speedcell4/orgs",
"repos_url": "https://api.github.com/users/speedcell4/repos",
"events_url": "https://api.github.com/users/speedcell4/events{/privacy}",
"received_events_url": "https://api.github.com/users/speedcell4/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,610 | 1,610 | 1,610 | NONE | null | https://github.com/huggingface/transformers/blob/7729ef738161a0a182b172fcb7c351f6d2b9c50d/examples/run_squad.py#L90
Should this by `layer_norm.weight`? Even seems you are not using weight decay at all. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9581/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9580 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9580/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9580/comments | https://api.github.com/repos/huggingface/transformers/issues/9580/events | https://github.com/huggingface/transformers/issues/9580 | 785,575,488 | MDU6SXNzdWU3ODU1NzU0ODg= | 9,580 | BatchEncoding.to() throwing torch NameError in 4.2.0; identical code works in 4.1.1 | {
"login": "KhoomeiK",
"id": 32777448,
"node_id": "MDQ6VXNlcjMyNzc3NDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/32777448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KhoomeiK",
"html_url": "https://github.com/KhoomeiK",
"followers_url": "https://api.github.com/users/KhoomeiK/followers",
"following_url": "https://api.github.com/users/KhoomeiK/following{/other_user}",
"gists_url": "https://api.github.com/users/KhoomeiK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KhoomeiK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KhoomeiK/subscriptions",
"organizations_url": "https://api.github.com/users/KhoomeiK/orgs",
"repos_url": "https://api.github.com/users/KhoomeiK/repos",
"events_url": "https://api.github.com/users/KhoomeiK/events{/privacy}",
"received_events_url": "https://api.github.com/users/KhoomeiK/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, thanks for raising an issue!\r\nIndeed, this is problematic. We're going to do a patch release this morning (v4.2.1) with a fix for this.",
"Ths fix is here: https://github.com/huggingface/transformers/pull/9584\r\n\r\nIt should be merged in a couple of hours, after which we'll release a patch."
] | 1,610 | 1,610 | 1,610 | NONE | null | ## Environment info
- `transformers` version: 4.2.0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.4.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Haven't explicitly set up any parallelization other than GPU acceleration and not sure it's relevant since this is an error in the tokenizer
This is on Google Colab with a GPU by the way.
### Who can help
@mfuntowicz (tokenizers)
@sgugger (recent commits to the relevant file)
## Information
Model I am using (Bert, XLNet ...): ALBERT (but problem seems to be in tokenizer)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
See script in reproduce section.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Sequence classification, but the problem arises when transporting a BatchEncoding object to a certain torch device.
## To reproduce
Steps to reproduce the behavior:
Run [this colab notebook](https://colab.research.google.com/drive/1Lpu8wE8-1SKGuVLpRhK8VIy4dOWvKteF?usp=sharing). Alternatively...
1. Create a colab instance with GPU acceleration
2. Install torch, sentencepiece, transformers==4.2.0
3. Run the code below
```python
import torch
from transformers import AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
tokens = tokenizer('hello world', return_tensors='pt')
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokens = tokens.to(device)
```
There is no output when running 4.1.1 (expected) but the output when running 4.2.0 is below:
```
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-2-ad769dc72ebd> in <module>()
6 tokens = tokenizer('hello world', return_tensors='pt')
7 device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
----> 8 tokens = tokens.to(device)
1 frames
/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py in wrapper(*args, **kwargs)
1302 def wrapper(*args, **kwargs):
1303 if is_torch_available():
-> 1304 return func(*args, **kwargs)
1305 else:
1306 raise ImportError(f"Method `{func.__name__}` requires PyTorch.")
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in to(self, device)
802 # Otherwise it passes the casts down and casts the LongTensor containing the token idxs
803 # into a HalfTensor
--> 804 if isinstance(device, str) or isinstance(device, torch.device) or isinstance(device, int):
805 self.data = {k: v.to(device=device) for k, v in self.data.items()}
806 else:
NameError: name 'torch' is not defined
```
## Expected behavior
There should be no console output and the tokens should be transferred to the correct device. The code works perfectly fine in version 4.1.1 of `transformers`.
I'll roll back to 4.1.1 for now, looking forward to any updates. Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9580/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9579 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9579/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9579/comments | https://api.github.com/repos/huggingface/transformers/issues/9579/events | https://github.com/huggingface/transformers/issues/9579 | 785,564,112 | MDU6SXNzdWU3ODU1NjQxMTI= | 9,579 | Some weights of XLMRobertaForMaskedLM were not initialized from the model checkpoint at xlm-roberta-base and are newly initialized | {
"login": "Syavaprd",
"id": 38497601,
"node_id": "MDQ6VXNlcjM4NDk3NjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/38497601?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Syavaprd",
"html_url": "https://github.com/Syavaprd",
"followers_url": "https://api.github.com/users/Syavaprd/followers",
"following_url": "https://api.github.com/users/Syavaprd/following{/other_user}",
"gists_url": "https://api.github.com/users/Syavaprd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Syavaprd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Syavaprd/subscriptions",
"organizations_url": "https://api.github.com/users/Syavaprd/orgs",
"repos_url": "https://api.github.com/users/Syavaprd/repos",
"events_url": "https://api.github.com/users/Syavaprd/events{/privacy}",
"received_events_url": "https://api.github.com/users/Syavaprd/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you put the full error in the description of the issue rather than in the title? We don't know which weights are not initialized.",
"> Could you put the full error in the description of the issue rather than in the title? We don't know which weights are not initialized.\r\nCode:\r\nmodel = XLMRobertaForMaskedLM.from_pretrained('xlm-roberta-base')\r\nWarning:\r\nSome weights of XLMRobertaForMaskedLM were not initialized from the model checkpoint at xlm-roberta-base and are newly initialized: ['lm_head.decoder.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.",
"You don't need to worry about that message, it lets you know that the bias in the LM head is not initialized - it will be initialized to all zeros.\r\n\r\nI'm removing this warning in #9615."
] | 1,610 | 1,610 | 1,610 | NONE | null | Where can I find weight that won't give the following error?
The code:
model = XLMRobertaForMaskedLM.from_pretrained('xlm-roberta-base') | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9579/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9578 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9578/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9578/comments | https://api.github.com/repos/huggingface/transformers/issues/9578/events | https://github.com/huggingface/transformers/pull/9578 | 785,535,078 | MDExOlB1bGxSZXF1ZXN0NTU0NTQ4NzA1 | 9,578 | Fix Trainer with a parallel model | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,610 | 1,610 | 1,610 | COLLABORATOR | null | # What does this PR do?
The test introduced in #9566 wasn't actually working as the default batch size is 8, not 16...
So the problem was still there, the reason because `_setup_devices` in `TrainingArguments` is a `cached_property`, so its result is computed once and for all at init. Had to change the behavior slightly, but it should be okay since it's a private method.
Fixes #9577 (model is getting wrapped into DataParallel because the value of `self.args.n_gpu` is not updated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9578/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9578",
"html_url": "https://github.com/huggingface/transformers/pull/9578",
"diff_url": "https://github.com/huggingface/transformers/pull/9578.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9578.patch",
"merged_at": 1610612622000
} |
https://api.github.com/repos/huggingface/transformers/issues/9577 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9577/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9577/comments | https://api.github.com/repos/huggingface/transformers/issues/9577/events | https://github.com/huggingface/transformers/issues/9577 | 785,493,555 | MDU6SXNzdWU3ODU0OTM1NTU= | 9,577 | Trainer is using DataParallel on parallelized models | {
"login": "jncasey",
"id": 31020859,
"node_id": "MDQ6VXNlcjMxMDIwODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/31020859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jncasey",
"html_url": "https://github.com/jncasey",
"followers_url": "https://api.github.com/users/jncasey/followers",
"following_url": "https://api.github.com/users/jncasey/following{/other_user}",
"gists_url": "https://api.github.com/users/jncasey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jncasey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jncasey/subscriptions",
"organizations_url": "https://api.github.com/users/jncasey/orgs",
"repos_url": "https://api.github.com/users/jncasey/repos",
"events_url": "https://api.github.com/users/jncasey/events{/privacy}",
"received_events_url": "https://api.github.com/users/jncasey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The `self.args._n_gpu = 1` is to avoid parallelizing the data so it has nothing to do with your problem (and it is right, we can't set `self.args.n_gpu` which is a property but that's a whole different story!)\r\n\r\nHow is your model parallelized? Without that piece of code we can't reproduce the bug and help you.",
"Thanks @sgugger.\r\n\r\nIn my test, I'm using some code originally derived from the run_clm.py example. I'm trying to fine-tune a GPT2 model I've trained from scratch. The model was parallelized with the following lines, and this exact fine-tuning script ran successfully yesterday in 4.1.1, using the `--model_parallel` training arg. \r\n\r\n```\r\n device_map = {0: range(0, 15),\r\n 1: range(15, 32)}\r\n model.parallelize(device_map)\r\n```\r\n\r\nThe error I'm getting now looks a lot like what would happen if I left out the `--model_parallel` flag in 4.1.1.\r\n",
"> RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1\r\n\r\nPlease post the full trace.\r\n\r\nI have only experimented with t5 and bart MP so far, but gpt2 is supposed to be very similar.\r\n\r\nMost likely the outputs aren't being copied back to the 0th gpu on return, so this won't have anything to do with the trainer. Most likely the issue you encountered has to do with evaluation and not training.\r\n\r\nI had to fix t5-MP to do that, but the PR with the fix hasn't been merged. \r\n\r\nhttps://github.com/huggingface/transformers/blob/58d047a596a97fbb815acb3e657102bf1960b06a/src/transformers/models/t5/modeling_t5.py#L1263-L1266\r\n\r\nI won't be surprised if gpt2 is missing that too.\r\n\r\n`model_parallel_inputs_to_specific_device` is a new function that isn't in master, but part of these 2 PRs: https://github.com/huggingface/transformers/pull/9323 and https://github.com/huggingface/transformers/pull/9384 - it relies on another function - the full new file is here: https://github.com/huggingface/transformers/blob/fe21c43745fcf3f7958c17c2ac461bd784094205/src/transformers/utils/model_parallel_utils.py\r\n\r\nThe current MP implementations are very limited and at the moment I highly recommend you look at DeepSpeed instead, see:\r\nhttps://github.com/huggingface/transformers/issues/8771#issuecomment-759176685 and\r\nhttps://github.com/huggingface/transformers/issues/8771#issuecomment-759248400\r\nYou will need master for that as it was just merged 2 days ago.\r\n\r\nWe also removed `--model_parallel` in trainer master as it wasn't fully baked in first place.",
"@stas00 This is linked to how `TrainingArguments.n_gpu` was computed. Could reproduce and test the fix in #9578 removes the bug.",
"That's easy then. The error though very much reminded me of the issue I described in my comment above.",
"Thanks both!\r\n\r\n@stas00 Definitely excited to check out DeepSpeed – that's the reason I started testing my code in 4.2.0"
] | 1,610 | 1,610 | 1,610 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.2.0
- Platform: Ubuntu 20.04
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 / CUDA 11.2
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
@sgugger @stas00
## Information
I'm trying out the 4.2.0 release with a training script that had been working in 4.1.1.
I'm parallelizing my model over two GPUs, and I had been using the `--model_parallel` training arg in the previous version. Now that it's no longer used, I removed the arg from my training command, but I'm getting an error as though the DataParallel is being used and the model isn't being detected as parallelized:
`RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1`
I did some debugging, and everything seems okay with my model (`trainer. is_model_parallel` returns True). But the `trainer. args.n_gpu` is still 2.
I admit that I don't totally understand what's happening in the trainer code, but it might be an error on line 289?
[`self.args._n_gpu = 1`](https://github.com/huggingface/transformers/blob/126fd281bc309ec29caef99e982640265c8a4fba/src/transformers/trainer.py#L289)
Should that be `self.args.n_gpu = 1`, without the leading underscore?
## To reproduce
Steps to reproduce the behavior:
1. Parallelize a model
2. Train on a machine with multiple GPUs
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9577/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9577/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9576 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9576/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9576/comments | https://api.github.com/repos/huggingface/transformers/issues/9576/events | https://github.com/huggingface/transformers/issues/9576 | 785,481,617 | MDU6SXNzdWU3ODU0ODE2MTc= | 9,576 | Pipeline - Truncation Keyword not Recognized | {
"login": "CMobley7",
"id": 10121829,
"node_id": "MDQ6VXNlcjEwMTIxODI5",
"avatar_url": "https://avatars.githubusercontent.com/u/10121829?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CMobley7",
"html_url": "https://github.com/CMobley7",
"followers_url": "https://api.github.com/users/CMobley7/followers",
"following_url": "https://api.github.com/users/CMobley7/following{/other_user}",
"gists_url": "https://api.github.com/users/CMobley7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CMobley7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CMobley7/subscriptions",
"organizations_url": "https://api.github.com/users/CMobley7/orgs",
"repos_url": "https://api.github.com/users/CMobley7/repos",
"events_url": "https://api.github.com/users/CMobley7/events{/privacy}",
"received_events_url": "https://api.github.com/users/CMobley7/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The [documentation](https://huggingface.co/transformers/main_classes/pipelines.html#transformers.pipeline) of the pipeline function clearly shows the `truncation` argument is not accepted, so i'm not sure why you are filing this as a bug.\r\n\r\nThe `__call__` method of a class is not what is used when you create it but when you... well, call it. So `results = nlp(narratives, **kwargs)` will probably work better.",
"@sgugger , you're right. Thanks for the quick response. Sorry, while I looked at https://github.com/huggingface/transformers/pull/9432, I didn't look close enough at https://github.com/huggingface/transformers/blob/master/tests/test_pipelines_summarization.py#L78 or the updated docs. It works now! Thanks @Narsil for adding this feature."
] | 1,610 | 1,610 | 1,610 | NONE | null | ## Environment info
- `transformers` version: 4.2.0
- Platform: Linux-5.4.0-58-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
and
- `transformers` version: 4.2.0
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@Narsil @sgugger @LysandreJik
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I tried to run both of the code snippets below and got the following error. The pipeline code looks like it should pass everything through correctly, but it doesn't. Maybe the __call__ function needs to be setup as it is in https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/text2text_generation.py#L59.
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
from transformers.tokenization_utils import TruncationStrategy
model = AutoModelForSequenceClassification.from_pretrained("/path/to/model/dir")
tokenizer = AutoTokenizer.from_pretrained("/path/to/model/dir")
nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer, return_all_scores=True, truncation=TruncationStrategy.LONGEST_FIRST)
results = nlp(narratives)
```
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
from transformers.tokenization_utils import TruncationStrategy
kwargs = {}
kwargs["truncation"] = TruncationStrategy.LONGEST_FIRST
model = AutoModelForSequenceClassification.from_pretrained("/path/to/model/dir")
tokenizer = AutoTokenizer.from_pretrained("/path/to/model/dir")
nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer, return_all_scores=True, **kwargs)
results = nlp(narratives)
```
```
Traceback (most recent call last):
File "/ptce/evaluate.py", line 102, in <module>
nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer, return_all_scores=True, truncation=TruncationStrategy.LONGEST_FIRST)
File "/usr/local/lib/python3.6/dist-packages/transformers/pipelines/__init__.py", line 418, in pipeline
return task_class(model=model, tokenizer=tokenizer, modelcard=modelcard, framework=framework, task=task, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/pipelines/text_classification.py", line 39, in __init__
super().__init__(**kwargs)
TypeError: __init__() got an unexpected keyword argument 'truncation'
```
## Expected behavior
For `truncation` to pass from `super().__call__(*args, **kwargs)` to `__call__(self, *args, **kwargs)` and then to `_parse_and_tokenize(self, inputs, padding=True, add_special_tokens=True, truncation=TruncationStrategy.DO_NOT_TRUNCATE, **kwargs)` where the default value is overwritten and text longer than max_sequence_length narratives are truncated.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9576/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9575 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9575/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9575/comments | https://api.github.com/repos/huggingface/transformers/issues/9575/events | https://github.com/huggingface/transformers/issues/9575 | 785,472,942 | MDU6SXNzdWU3ODU0NzI5NDI= | 9,575 | Converting original BERT tf checkpoints to BertForMaskedLM | {
"login": "ethch18",
"id": 12580176,
"node_id": "MDQ6VXNlcjEyNTgwMTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/12580176?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ethch18",
"html_url": "https://github.com/ethch18",
"followers_url": "https://api.github.com/users/ethch18/followers",
"following_url": "https://api.github.com/users/ethch18/following{/other_user}",
"gists_url": "https://api.github.com/users/ethch18/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ethch18/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ethch18/subscriptions",
"organizations_url": "https://api.github.com/users/ethch18/orgs",
"repos_url": "https://api.github.com/users/ethch18/repos",
"events_url": "https://api.github.com/users/ethch18/events{/privacy}",
"received_events_url": "https://api.github.com/users/ethch18/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The `BertForPreTraining` contains two heads: the NSP and the MLM heads. Therefore, by using the tf1 conversion script, you're already porting the entire model!\r\n\r\nFor the configuration, it should align pretty seamlessly to Google's configurations, but you can check the expected field here: https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/configuration_bert.py#L120-L138",
"Great, thank you! For anyone who's curious, I ended up updating my config file with a few extra fields from here to make `Auto*` detection work: https://huggingface.co/bert-base-multilingual-cased/blob/main/config.json"
] | 1,610 | 1,610 | 1,610 | CONTRIBUTOR | null | Hi! I have some BERT models that I've trained using the original Google code for BERT, and I was hoping to port them over to `transformers`. I noticed that there are two scripts to do this conversion: one for [the original tf1.x code](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py), and one for [the new tf2 code](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/convert_bert_original_tf2_checkpoint_to_pytorch.py).
I noticed that tf2 conversion script has the following comment:
> You may adapt this script to include classification/MLM/NSP/etc. heads.
I'm using tf1.x, and that comment isn't the tf1.x conversion script. However, the output model is a `BertForPreTraining`, and I'd like to port the entire MLM head over too. I'm assuming that I'd need to somehow get a `BertForMaskedLM` in order to keep the MLM head.
Questions:
1. Would I have to make any modifications to the tf1.x conversion script other than swapping `BertForPreTraining` -> `BertForMaskedLM`?
2. I also noticed that the BERT configs on the model hub are slightly different than the original Google configs. Is there any additional processing that I'd need to do to convert my configs too, so that they can be loaded by the `Auto*` classes?
Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9575/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9575/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9574 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9574/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9574/comments | https://api.github.com/repos/huggingface/transformers/issues/9574/events | https://github.com/huggingface/transformers/pull/9574 | 785,448,387 | MDExOlB1bGxSZXF1ZXN0NTU0NDc2MDEw | 9,574 | Upstream (and rename) sortish sampler | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,610 | 1,610 | 1,610 | COLLABORATOR | null | # What does this PR do?
This PR moves the logic of the "sortish sampler" from examples/seq2seq utils to `trainer_pt_utils` to make this behavior available for all types of training in the main `Trainer`. It also fixes a bug in the previous implementation of the distributed sortish sampler that did not synchronize the random generator used for the shuffling (thus the data returned on the two processes joined together was not a permutation of the whole dataset).
The sortish sampler logic is to group items of the training datasets that have similar lengths together to minimize padding while retaining a bit of randomness. It does some sorting for this, but that's not the main feature, and it's unclear for anyone reading it what it might do, so the argument name was badly chose in my opinion. I chose to name it `group_by_length` when introducing it in `TrainingArguments` (while keeping the old `sortish_sampler` argument in `Seq2SeqTrainingArguments` for backward compatibility, for now).
The actual samplers are given by the two introduced classes `LengthGroupedSampler` and `DistributedLengthGroupedSampler`. They are both tested in the test, and in particular, the distributed one has a test it is using the same random generator for the bit of randomness.
Renaming the old `sortish_sampler` arg is just done in the tests of seq2seq examples for now, it will be done more generally when the seq2seq finetuning script is rewritten to use `datasets`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9574/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/9574/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9574",
"html_url": "https://github.com/huggingface/transformers/pull/9574",
"diff_url": "https://github.com/huggingface/transformers/pull/9574.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9574.patch",
"merged_at": 1610638694000
} |
https://api.github.com/repos/huggingface/transformers/issues/9573 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9573/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9573/comments | https://api.github.com/repos/huggingface/transformers/issues/9573/events | https://github.com/huggingface/transformers/issues/9573 | 785,390,325 | MDU6SXNzdWU3ODUzOTAzMjU= | 9,573 | Multilingual MiniLM | {
"login": "rodrigoheck",
"id": 29047455,
"node_id": "MDQ6VXNlcjI5MDQ3NDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/29047455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rodrigoheck",
"html_url": "https://github.com/rodrigoheck",
"followers_url": "https://api.github.com/users/rodrigoheck/followers",
"following_url": "https://api.github.com/users/rodrigoheck/following{/other_user}",
"gists_url": "https://api.github.com/users/rodrigoheck/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rodrigoheck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rodrigoheck/subscriptions",
"organizations_url": "https://api.github.com/users/rodrigoheck/orgs",
"repos_url": "https://api.github.com/users/rodrigoheck/repos",
"events_url": "https://api.github.com/users/rodrigoheck/events{/privacy}",
"received_events_url": "https://api.github.com/users/rodrigoheck/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Alright! I found the solution. For the tokenizer, XLMRobertaTokenizer should be used instead of AutoTokenizer. ",
"Thanks for reporting. Now that we have model versioning, the author(s) of [`\"microsoft/Multilingual-MiniLM-L12-H384\"`](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) could update the model's config.json to specify a `tokenizer_class` so that AutoTokenizer works out of the box.\r\n\r\n@JetRunner @LysandreJik do you remember who the model author(s) are?",
"I think I did the uploading. I'll update the config tomorrow!",
"I believe the config is not yet updated because the error is still there",
"@sersoage Re-uploading it now. Thanks for the note!",
"Done",
"@JetRunner Thank you!"
] | 1,610 | 1,622 | 1,610 | NONE | null | Hello everyone!
I am trying to load this model from Microsoft using the path provided [here](huggingface.co/microsoft/Multilingual-MiniLM-L12-H384). I am applying the same code provided there:
`tokenizer = AutoTokenizer.from_pretrained("microsoft/Multilingual-MiniLM-L12-H384")`
But I am facing this error message:
`stat: path should be string, bytes, os.PathLike or integer, not NoneType`
My intuition says that the model is not correctly stored on the server, but I am not sure. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9573/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9572 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9572/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9572/comments | https://api.github.com/repos/huggingface/transformers/issues/9572/events | https://github.com/huggingface/transformers/issues/9572 | 785,338,804 | MDU6SXNzdWU3ODUzMzg4MDQ= | 9,572 | How to train the models in smaller spochs | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!",
"yes,. I asked but forum is inactive, please do not close it I really need\nhelp on this.\n\nOn Thu, Jan 14, 2021 at 9:35 AM Lysandre Debut <[email protected]>\nwrote:\n\n> Hello, thanks for opening an issue! We try to keep the github issues for\n> bugs/feature requests.\n> Could you ask your question on the forum <https://discusss.huggingface.co>\n> instead?\n>\n> Thanks!\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/9572#issuecomment-760020852>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AM3GZM3PY5THOZ4TKXEDGB3SZ2UFXANCNFSM4WBHELJA>\n> .\n>\n",
"please do not close it I really need help on this and forrum is inactive",
"Our github issue policy is detailed in the related [ISSUE.md](https://github.com/huggingface/transformers/blob/master/ISSUES.md) and very clearly indicate we reserve the issue tracker for bugs and features request which this issue is not, as well as several other issues you have recently opened.\r\n\r\nFeel free to post here a link to the thread you should open on the forum if you want to be visible in both location (though this should stay very exceptional).\r\n\r\nThe forum is NOT inactive, I see that you already have several answers to your [related post](https://discuss.huggingface.co/t/training-models-for-smaller-epochs-and-then-continue-trianing/3153). This is the place for discussion. NOT here in the issues.\r\n\r\nOverall, please note that if you persist in not following the guidelines and open-source collaboration policies that we have defined and shared with the community on the repository in the [CODE_OF_CONDUCT](https://github.com/huggingface/transformers/blob/master/CODE_OF_CONDUCT.md), the [CONTRIBUTING](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) and [ISSUES](https://github.com/huggingface/transformers/blob/master/ISSUES.md) documents, we reserve the right to take the moderation actions advised by GitHub in the [Community guidelines](https://docs.github.com/en/free-pro-team@latest/github/site-policy/github-community-guidelines).",
"Hi Thomas,\nI did received some response, but those response were not helpful, this\nneeds a response from someone developed the codes, to know the small\ndetails which can help,\nplease assist me with the issue, is there a way I could get a better help\nin forume? so someone with more knowledge see the question?\nthanks\n\nOn Thu, Jan 14, 2021 at 1:54 PM Thomas Wolf <[email protected]>\nwrote:\n\n> Our github issue policy is detailed in the related ISSUE.md\n> <https://github.com/huggingface/transformers/blob/master/ISSUES.md> and\n> very clearly indicate we reserve the issue tracker for bugs and features\n> request which this issue is not, as well as several other issues you have\n> recently opened.\n>\n> Feel free to post here a link to the thread you should open on the forum\n> if you want to be visible in both location (though this should stay very\n> exceptional).\n>\n> The forum is NOT inactive, I see that you already have several answers to\n> your related post\n> <https://discuss.huggingface.co/t/training-models-for-smaller-epochs-and-then-continue-trianing/3153>.\n> This is the place for discussion. NOT here in the issues.\n>\n> Overall, please note that if you persist in not following the guidelines\n> and open-source collaboration policies that we have defined and shared with\n> the community on the repository in the CODE_OF_CONDUCT\n> <https://github.com/huggingface/transformers/blob/master/CODE_OF_CONDUCT.md>,\n> the CONTRIBUTING\n> <https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md>\n> and ISSUES\n> <https://github.com/huggingface/transformers/blob/master/ISSUES.md>\n> documents, we reserve the right to take the moderation actions advised by\n> GitHub in the Community guidelines\n> <https://docs.github.com/en/free-pro-team@latest/github/site-policy/github-community-guidelines>\n> .\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/9572#issuecomment-760178500>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AM3GZM7FRFC7SZF7DBIGIZLSZ3SPHANCNFSM4WBHELJA>\n> .\n>\n",
"To me this question can safely be marked as a bug, since if one trains a\nfinetune_trainer.py till some epochs and start retraining from the saved\ncheckpoints, this does not get the same accuracy as full training, can I\nfile a bug for this issue? thanks\n\nOn Sat, Jan 16, 2021 at 12:13 PM julia hane <[email protected]> wrote:\n\n> Hi Thomas,\n> I did received some response, but those response were not helpful, this\n> needs a response from someone developed the codes, to know the small\n> details which can help,\n> please assist me with the issue, is there a way I could get a better help\n> in forume? so someone with more knowledge see the question?\n> thanks\n>\n> On Thu, Jan 14, 2021 at 1:54 PM Thomas Wolf <[email protected]>\n> wrote:\n>\n>> Our github issue policy is detailed in the related ISSUE.md\n>> <https://github.com/huggingface/transformers/blob/master/ISSUES.md> and\n>> very clearly indicate we reserve the issue tracker for bugs and features\n>> request which this issue is not, as well as several other issues you have\n>> recently opened.\n>>\n>> Feel free to post here a link to the thread you should open on the forum\n>> if you want to be visible in both location (though this should stay very\n>> exceptional).\n>>\n>> The forum is NOT inactive, I see that you already have several answers to\n>> your related post\n>> <https://discuss.huggingface.co/t/training-models-for-smaller-epochs-and-then-continue-trianing/3153>.\n>> This is the place for discussion. NOT here in the issues.\n>>\n>> Overall, please note that if you persist in not following the guidelines\n>> and open-source collaboration policies that we have defined and shared with\n>> the community on the repository in the CODE_OF_CONDUCT\n>> <https://github.com/huggingface/transformers/blob/master/CODE_OF_CONDUCT.md>,\n>> the CONTRIBUTING\n>> <https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md>\n>> and ISSUES\n>> <https://github.com/huggingface/transformers/blob/master/ISSUES.md>\n>> documents, we reserve the right to take the moderation actions advised by\n>> GitHub in the Community guidelines\n>> <https://docs.github.com/en/free-pro-team@latest/github/site-policy/github-community-guidelines>\n>> .\n>>\n>> —\n>> You are receiving this because you authored the thread.\n>> Reply to this email directly, view it on GitHub\n>> <https://github.com/huggingface/transformers/issues/9572#issuecomment-760178500>,\n>> or unsubscribe\n>> <https://github.com/notifications/unsubscribe-auth/AM3GZM7FRFC7SZF7DBIGIZLSZ3SPHANCNFSM4WBHELJA>\n>> .\n>>\n>\n",
"The maintainers have very clearly labelled it as NOT a bug ([here](https://github.com/huggingface/transformers/issues/9572#issuecomment-760020852)).\r\n\r\nWhat you are asking here is a free consultation which unfortunately we don't provide at HuggingFace so here is the next step I think are the most adapted in the present case:\r\n- you've probably reached the limit of the amount of consulting the community could provide for free, the best would be now to hire a consultant to build a solution for you\r\n- regarding the current issue and your usage of the open-source repository, my mission is now to step up and protect the maintainers from the reprository to be able to conduct their mission of maintaining the repository and not diverging from their task to conduct free consulting missions. As such this is now the second warning I send you to follow the guidelines and open-source collaboration policies that we have defined and shared with the community on the repository in the CODE_OF_CONDUCT, the CONTRIBUTING and ISSUES documents (see my message [here](https://github.com/huggingface/transformers/issues/9572#issuecomment-760178500)).\r\n- Which lead us to the last point: if I have to spend more time and send you a third warning to use the repository as it was designed for the community I will have to limit your ability to open issues on our repositories following the GitHub Community guidelines as I explained to you in my message [here](https://github.com/huggingface/transformers/issues/9572#issuecomment-760178500). That would be the first time I have to do that with a community of thousand of people so please just use our forum tools like we designed them according to our guidelines stated [here](https://github.com/huggingface/transformers/issues/9572#issuecomment-760178500)."
] | 1,610 | 1,610 | 1,610 | NONE | null | Hi
I am under low compute hours, could you tell me how I can train finetune_trainer.py for smaller iterations and then continue retraining from the saved checkpoint to reproduce the same results as full training? what are the cares to be taken, and things to pay attention, thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9572/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9571 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9571/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9571/comments | https://api.github.com/repos/huggingface/transformers/issues/9571/events | https://github.com/huggingface/transformers/issues/9571 | 785,295,228 | MDU6SXNzdWU3ODUyOTUyMjg= | 9,571 | Tensorflow pretrained FlauBERT mixed precision error | {
"login": "widerspruchs",
"id": 24782312,
"node_id": "MDQ6VXNlcjI0NzgyMzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/24782312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/widerspruchs",
"html_url": "https://github.com/widerspruchs",
"followers_url": "https://api.github.com/users/widerspruchs/followers",
"following_url": "https://api.github.com/users/widerspruchs/following{/other_user}",
"gists_url": "https://api.github.com/users/widerspruchs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/widerspruchs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/widerspruchs/subscriptions",
"organizations_url": "https://api.github.com/users/widerspruchs/orgs",
"repos_url": "https://api.github.com/users/widerspruchs/repos",
"events_url": "https://api.github.com/users/widerspruchs/events{/privacy}",
"received_events_url": "https://api.github.com/users/widerspruchs/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello!\r\n\r\nUnfortunately the TF models are not yet compliant with the \"float16\" mixed precision. This is our main goal for the next release (the one after 4.2.X) as we are actively working on this.\r\n\r\nSorry the the inconvenience. I will update this post once done. ",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,610 | 1,614 | 1,614 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.0
- Python version: 3.7.6
- Tensorflow version (GPU?): 2.3.0
- Using GPU in script?: Yes - GPU Tesla V100-SXM2-16GB, compute capability 7.0
- Using distributed or parallel set-up in script?: No
### Who can help
@jplu
## Information
Model I am using (Bert, XLNet ...): "jplu/tf-flaubert-small-cased"
The problem arises when using:
* [ x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task:
* [x ] my own task or dataset: text classification
## To reproduce
Steps to reproduce the behavior:
1. Set the dtype policies to mixed precision "float16" with tensorflow
2. Load pre-trained tensorflow flaubert model ("jplu/tf-flaubert-small-cased")
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
**After executing the following code:**
```python
from transformers import TFFlaubertModel
import tensorflow as tf
from tensorflow.keras.mixed_precision import experimental as mixed_precision
policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)
model_name = "jplu/tf-flaubert-small-cased"
model = TFFlaubertModel.from_pretrained(model_name)
```
**I got the following error:**
```
InvalidArgumentError: cannot compute AddV2 as input #1(zero-based) was expected to be a half tensor but is a float tensor [Op:AddV2]
```
## Expected behavior
There should not be any problem. When I run "bert-base-case" pretrained model it works perfectly (the bottom code does not return any error)
```python
from transformers import TFBertModel
import tensorflow as tf
from tensorflow.keras.mixed_precision import experimental as mixed_precision
policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)
model_name = "bert-base-cased"
model = TFBertModel.from_pretrained(model_name)
```
Maybe there is an issue about hard-coded uses of float32 in FlauBERT ? and it is not fixed yet unlike other models ?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9571/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9570 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9570/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9570/comments | https://api.github.com/repos/huggingface/transformers/issues/9570/events | https://github.com/huggingface/transformers/pull/9570 | 785,268,285 | MDExOlB1bGxSZXF1ZXN0NTU0MzIyODAw | 9,570 | Compliancy with tf-nightly | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ok, just restored the previous version checking."
] | 1,610 | 1,610 | 1,610 | CONTRIBUTOR | null | # What does this PR do?
This PR makes the lib able to be used with the nightly builds of TensorFlow. And fix an issue with the min TensorFlow version. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9570/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9570",
"html_url": "https://github.com/huggingface/transformers/pull/9570",
"diff_url": "https://github.com/huggingface/transformers/pull/9570.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9570.patch",
"merged_at": 1610616936000
} |
https://api.github.com/repos/huggingface/transformers/issues/9569 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9569/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9569/comments | https://api.github.com/repos/huggingface/transformers/issues/9569/events | https://github.com/huggingface/transformers/pull/9569 | 785,247,520 | MDExOlB1bGxSZXF1ZXN0NTU0MzA1NTUz | 9,569 | Add head_mask/decoder_head_mask for BART | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for opening a new PR. Let me know if you need a review (It's also ok if I go into the PR and fix some things if your stuck :-) )",
"@patrickvonplaten I hope this PR is again ready for review. The only thing remaining to resolve is that issue in `test_headmasking` described above. Currently, I've been trying to fix this one, but I'll be grateful for sure if you can have a look at that too :)",
"Hey @patrickvonplaten. I would like to inform you I fixed `test_headmasking` for BART-based. The problem was that code inside\r\n```\r\nself.assertNotEqual(attentions[1][..., 0, :, :].flatten().sum().item(), 0.0)\r\n```\r\npointed to the last layer of encoder/decoder (encoder-decoder models have only 2 layers in each module while BERT has 5 layers during testing). At the end of the day, this condition was invalid for BART-based models considering the `head_mask` to be\r\n```\r\nhead_mask = torch.ones(\r\n self.model_tester.num_hidden_layers,\r\n self.model_tester.num_attention_heads,\r\n device=torch_device,\r\n)\r\nhead_mask[0, 0] = 0\r\nhead_mask[-1, :-1] = 0\r\n```\r\n\r\nI hope this PR is then ready for review."
] | 1,610 | 1,611 | 1,610 | CONTRIBUTOR | null | This PR implement `head_mask` and `decoder_head_mask` for PyTorch BART-based models. The full list, please, see below:
- **BART**
- **MBart**
- **Blenderbot**
- **BlenderbotSmall**
- **Marian**
- **Pegasus**
This PR is a follow up on the closed PR #9404.
**Motivation**:
According to HuggingFace's websites "There is a growing field of study concerned with investigating the inner working of large-scale transformers like BERT (that some call “BERTology”)." This PR enables to mask attention heads in encoder and decoder models exactly like for BERT. This PR thus creates an opportunity to study the importance of attention heads in encoder-decoder BERT-like model.
**Description**
- New arguments `head_mask` and`decoder_head_mask` are passed to all the BART-based models `...Model`, `...ForConditionalGeneration` and `...ForQuestionAnswering` after four arguments `input_ids, attention_mask, decoder_input_ids, decoder_attention_mask` so that a testing and whole pipeline remains smooth.
- This PR also contains updated `test_headmasking`, which currently works fine with one problem - BART-based models do not satisfy a condition:
```
self.assertNotEqual(attentions[1][..., 0, :, :].flatten().sum().item(), 0.0).
```
Fixing this problem is currently underway.
**Reviewer:** @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9569/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9569/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9569",
"html_url": "https://github.com/huggingface/transformers/pull/9569",
"diff_url": "https://github.com/huggingface/transformers/pull/9569.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9569.patch",
"merged_at": 1610973322000
} |
https://api.github.com/repos/huggingface/transformers/issues/9568 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9568/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9568/comments | https://api.github.com/repos/huggingface/transformers/issues/9568/events | https://github.com/huggingface/transformers/issues/9568 | 785,245,164 | MDU6SXNzdWU3ODUyNDUxNjQ= | 9,568 | pegasus fine-tune: TypeError: shift_tokens_right() missing 1 required positional argument: 'decoder_start_token_id' | {
"login": "cheop-byeon",
"id": 55306172,
"node_id": "MDQ6VXNlcjU1MzA2MTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/55306172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cheop-byeon",
"html_url": "https://github.com/cheop-byeon",
"followers_url": "https://api.github.com/users/cheop-byeon/followers",
"following_url": "https://api.github.com/users/cheop-byeon/following{/other_user}",
"gists_url": "https://api.github.com/users/cheop-byeon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cheop-byeon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cheop-byeon/subscriptions",
"organizations_url": "https://api.github.com/users/cheop-byeon/orgs",
"repos_url": "https://api.github.com/users/cheop-byeon/repos",
"events_url": "https://api.github.com/users/cheop-byeon/events{/privacy}",
"received_events_url": "https://api.github.com/users/cheop-byeon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @cheop-byeon, \r\n\r\nwe no longer actively maintain the `research_projects` folder ourselves. To solve your problem you can however just add \r\n`model.config.decoder_start_token_id` as the third argument to the function.",
"Note that we recommend that you use the research project with it's proposed version being `pip install transformers==4.1.0`. We won't do actively maintain the code in `research_projects` anymore."
] | 1,610 | 1,611 | 1,610 | NONE | null | ## Environment info
- `transformers` version: 4.2.0dev0
- Platform: Linux-3.10.0-1062.18.1.el7.x86_64-x86_64-with-centos-7.7.1908-Core
- Python version: 3.7.3
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@mfuntowicz
@sgugger
@patrickvonplaten
## Information
I am trying to do the fine-fune of pegasus in the summarization task of xsum dataset according to the instructions here.
## To reproduce
add more paramters in finetune_pegasus_xsum.sh
```
python finetune.py \
--gpus 0 \
--learning_rate=1e-4 \
--do_train \
--do_predict \
--n_val 1000 \
--val_check_interval 0.25 \
--max_source_length 512 --max_target_length 56 \
--freeze_embeds --label_smoothing 0.1 --adafactor --task summarization_xsum \
--model_name_or_path google/pegasus-xsum \
--output_dir=xsum_results \
--data_dir xsum \
--tokenizer_name google/pegasus-large \
"$@"
```
in the terminal:
```
(env) (base) [cheop.byeon@node01 seq2seq-distillation]$ sh finetune_pegasus_xsum.sh
/home/cheop.byeon/env/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:45: UserWarning: Checkpoint directory xsum_results exists and is not empty. With save_top_k=1, all files in this directory will be deleted when a checkpoint is saved!
warnings.warn(*args, **kwargs)
/home/cheop.byeon/env/lib/python3.7/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 10000). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
Validation sanity check: 0it [00:00, ?it/s]Traceback (most recent call last):
File "finetune.py", line 442, in <module>
main(args)
File "finetune.py", line 417, in main
logger=logger,
File "/home/cheop.byeon/transformers/examples/research_projects/seq2seq-distillation/lightning_base.py", line 389, in generic_train
trainer.fit(model)
File "/home/cheop.byeon/env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 440, in fit
results = self.accelerator_backend.train()
File "/home/cheop.byeon/env/lib/python3.7/site-packages/pytorch_lightning/accelerators/cpu_accelerator.py", line 48, in train
results = self.train_or_test()
File "/home/cheop.byeon/env/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 68, in train_or_test
results = self.trainer.train()
File "/home/cheop.byeon/env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 462, in train
self.run_sanity_check(self.get_model())
File "/home/cheop.byeon/env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 650, in run_sanity_check
_, eval_results = self.run_evaluation(test_mode=False, max_batches=self.num_sanity_val_batches)
File "/home/cheop.byeon/env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 570, in run_evaluation
output = self.evaluation_loop.evaluation_step(test_mode, batch, batch_idx, dataloader_idx)
File "/home/cheop.byeon/env/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 171, in evaluation_step
output = self.trainer.accelerator_backend.validation_step(args)
File "/home/cheop.byeon/env/lib/python3.7/site-packages/pytorch_lightning/accelerators/cpu_accelerator.py", line 64, in validation_step
output = self.trainer.model.validation_step(*args)
File "finetune.py", line 182, in validation_step
return self._generative_step(batch)
File "finetune.py", line 226, in _generative_step
loss_tensors = self._step(batch)
File "finetune.py", line 145, in _step
decoder_input_ids = shift_tokens_right(tgt_ids, pad_token_id)
TypeError: shift_tokens_right() missing 1 required positional argument: 'decoder_start_token_id'
```
## Expected behavior
To make the finetune works in my environment.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9568/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9567 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9567/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9567/comments | https://api.github.com/repos/huggingface/transformers/issues/9567/events | https://github.com/huggingface/transformers/pull/9567 | 785,201,630 | MDExOlB1bGxSZXF1ZXN0NTU0MjY3MTk2 | 9,567 | Switch metrics in run_ner to datasets | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,610 | 1,610 | 1,610 | COLLABORATOR | null | # What does this PR do?
This PR uses `datasets` to compute the metrics in the `run_ner` script. This allows us to grab the entity level metrics on top of the overall ones if we want them, which is controlled by the newly added flag `--return_entity_level_metrics`.
Fixes #9546
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9567/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9567/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9567",
"html_url": "https://github.com/huggingface/transformers/pull/9567",
"diff_url": "https://github.com/huggingface/transformers/pull/9567.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9567.patch",
"merged_at": 1610613427000
} |
https://api.github.com/repos/huggingface/transformers/issues/9566 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9566/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9566/comments | https://api.github.com/repos/huggingface/transformers/issues/9566/events | https://github.com/huggingface/transformers/pull/9566 | 785,156,938 | MDExOlB1bGxSZXF1ZXN0NTU0MjI5ODY0 | 9,566 | Fix data parallelism in Trainer | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,610 | 1,610 | 1,610 | COLLABORATOR | null | # What does this PR do?
A bug in data parallelism was introduced in #9451 (mostly because of some weird behavior of dataclasses in python) and data was... well not parallelized anymore (more like the batch size ended up divided by the number of GPUs).
This PR fixes that and to make sure it didn't break the behavior introduced in #9451 for model parallelism, adds a multiGPU test (passing locally) to ensure data is not parallelized when the model is parallel.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9566/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9566",
"html_url": "https://github.com/huggingface/transformers/pull/9566",
"diff_url": "https://github.com/huggingface/transformers/pull/9566.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9566.patch",
"merged_at": 1610549682000
} |
https://api.github.com/repos/huggingface/transformers/issues/9565 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9565/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9565/comments | https://api.github.com/repos/huggingface/transformers/issues/9565/events | https://github.com/huggingface/transformers/pull/9565 | 785,083,678 | MDExOlB1bGxSZXF1ZXN0NTU0MTY4MDY5 | 9,565 | Make logs TF compliant | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Okey for me"
] | 1,610 | 1,610 | 1,610 | CONTRIBUTOR | null | # What does this PR do?
Currently when TensorFlow model is run in graph mode, the logs are displayed as many time the method is called even if the confition is not respected. This is because in graph mode the logs are not compiled in the graph and then are displayed all the time. To fix this, we use now `tf.print` that compiles the message inside the graph and will be displayed only when the conditions are respected.
## Fixes issue
#9285 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9565/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9565/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9565",
"html_url": "https://github.com/huggingface/transformers/pull/9565",
"diff_url": "https://github.com/huggingface/transformers/pull/9565.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9565.patch",
"merged_at": 1610618214000
} |
https://api.github.com/repos/huggingface/transformers/issues/9564 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9564/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9564/comments | https://api.github.com/repos/huggingface/transformers/issues/9564/events | https://github.com/huggingface/transformers/pull/9564 | 785,031,408 | MDExOlB1bGxSZXF1ZXN0NTU0MTI0MDQ0 | 9,564 | Remove unused token_type_ids in MPNet | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm totally fine to definitely suppress this argument in once if this is prefered (I would prefer as well)",
"Not an expert neither. Maybe @LysandreJik knows better.",
"> Thanks for adapting @jplu!\r\n> \r\n> Not an expert on the tokenization part, but is the method `build_inputs_with_special_tokens` still necessary in that case? (It's in both tokenizers files.)\r\n\r\nI think it's still required as it puts *e.g.* the [sep] token correctly between two sentences. I don't think that `build_inputs_with_special_tokens` necessarily has something to do with `token_type_ids`"
] | 1,610 | 1,610 | 1,610 | CONTRIBUTOR | null | # What does this PR do?
This PR adds a warning when the argument `token_type_ids` is given a show a message saying that this argument is never used. I just supressed the internal appearance of this argument without modifying the method signatures in order to do not integrates a breaking change.
Should I update the Tokenizer to make it returns only `attention_mask`?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9564/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9564/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9564",
"html_url": "https://github.com/huggingface/transformers/pull/9564",
"diff_url": "https://github.com/huggingface/transformers/pull/9564.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9564.patch",
"merged_at": 1610715990000
} |
https://api.github.com/repos/huggingface/transformers/issues/9563 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9563/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9563/comments | https://api.github.com/repos/huggingface/transformers/issues/9563/events | https://github.com/huggingface/transformers/issues/9563 | 785,030,315 | MDU6SXNzdWU3ODUwMzAzMTU= | 9,563 | finetune_trainer.py script is not using given config file | {
"login": "marcoabrate",
"id": 43387597,
"node_id": "MDQ6VXNlcjQzMzg3NTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/43387597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcoabrate",
"html_url": "https://github.com/marcoabrate",
"followers_url": "https://api.github.com/users/marcoabrate/followers",
"following_url": "https://api.github.com/users/marcoabrate/following{/other_user}",
"gists_url": "https://api.github.com/users/marcoabrate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcoabrate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcoabrate/subscriptions",
"organizations_url": "https://api.github.com/users/marcoabrate/orgs",
"repos_url": "https://api.github.com/users/marcoabrate/repos",
"events_url": "https://api.github.com/users/marcoabrate/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcoabrate/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @marcoabrate \r\n\r\nthe script uses the provided `config` file, the reason you see `max_length` 56 because the script replaces the generate params (max/min_len etc) in `config` using `task_specific_params` \r\n\r\nhttps://github.com/huggingface/transformers/blob/245cdb469d2a7f47316926fdbac925e0ed149332/examples/seq2seq/finetune_trainer.py#L216\r\n\r\nHere in the `config`, `min_length` is 56 in `task_specific_params` so 10 get's changed to 50. ",
"Thank you. I have managed to make it work with my configuration parameters, the problem was indeed the task specific params.\r\nIn any case, I think the `loading configuration file https://huggingface.co/sshleifer/distilbart-cnn-12-6/resolve/main/config.json from cache` message and the second config print are a bit misleading.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,610 | 1,614 | 1,614 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.1 (stable)
- Platform: Google Colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
Trainer: @sgugger
examples/seq2seq: @patil-suraj
## Information
Model I am using (Bert, XLNet ...): Bart
The problem arises when using the official example scripts: `examples/seq2seq/finetune_trainer.py`
## Problem
When giving a local configuration file with `--config_name` the script first loads the config from the local files as expected, but then it loads a new configuration file from cache, which is not the one provided through the script's arguments:
```
2021-01-13 11:04:52.919133: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
01/13/2021 11:04:55 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: True
01/13/2021 11:04:55 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(output_dir='/content/drive/My Drive/MAGMA: Summarization/fine-tuning/sshleifer?distilbart-cnn-12-6_karger_books_para_train', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, model_parallel=False, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=4, per_device_eval_batch_size=4, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=3e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=10.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Jan13_11-04-55_a1d3ea40f4c6', logging_first_step=False, logging_steps=10, save_steps=1000, save_total_limit=3, no_cuda=False, seed=42, fp16=True, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, dataloader_num_workers=0, past_index=-1, run_name='/content/drive/My Drive/MAGMA: Summarization/fine-tuning/sshleifer?distilbart-cnn-12-6_karger_books_para_train', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model='rougeL', greater_is_better='True', ignore_data_skip=False, fp16_backend='auto', sharded_ddp=False, label_smoothing=0.1, sortish_sampler=True, predict_with_generate=True, adafactor=True, encoder_layerdrop=None, decoder_layerdrop=None, dropout=None, attention_dropout=None, lr_scheduler='linear')
[INFO|configuration_utils.py:429] 2021-01-13 11:04:55,952 >> loading configuration file /content/drive/My Drive/MAGMA: Summarization/fine-tuning/sshleifer?distilbart-cnn-12-6_config/config.json
[INFO|configuration_utils.py:467] 2021-01-13 11:04:55,953 >> Model config BartConfig {
"_num_labels": 3,
"activation_dropout": 0.0,
"activation_function": "gelu",
"add_bias_logits": false,
"add_final_layer_norm": false,
"architectures": [
"BartForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 0,
"classif_dropout": 0.0,
"classifier_dropout": 0.0,
"d_model": 1024,
"decoder_attention_heads": 16,
"decoder_ffn_dim": 4096,
"decoder_layerdrop": 0.0,
"decoder_layers": 6,
"decoder_start_token_id": 2,
"do_blenderbot_90_layernorm": false,
"dropout": 0.1,
"early_stopping": true,
"encoder_attention_heads": 16,
"encoder_ffn_dim": 4096,
"encoder_layerdrop": 0.0,
"encoder_layers": 12,
"eos_token_id": 2,
"extra_pos_embeddings": 2,
"force_bos_token_to_be_generated": true,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2"
},
"init_std": 0.02,
"is_encoder_decoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2
},
"max_length": 150,
"max_position_embeddings": 1024,
"min_length": 10,
"model_type": "bart",
"no_repeat_ngram_size": 5,
"normalize_before": false,
"normalize_embedding": true,
"num_beams": 4,
"num_hidden_layers": 12,
"output_past": true,
"pad_token_id": 1,
"prefix": " ",
"replacing_rate": 0,
"scale_embedding": false,
"static_position_embeddings": false,
"student_decoder_layers": null,
"student_encoder_layers": null,
"task_specific_params": {
"summarization": {
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 142,
"min_length": 56,
"no_repeat_ngram_size": 3,
"num_beams": 4
}
},
"use_cache": true,
"vocab_size": 50264
}
01/13/2021 11:04:56 - INFO - filelock - Lock 140709138970608 acquired on /content/drive/My Drive/MAGMA: Summarization/fine-tuning/sshleifer?distilbart-cnn-12-6_karger_books_para_train/adac95cf641be69365b3dd7fe00d4114b3c7c77fb0572931db31a92d4995053b.9307b6cec4435559ec6e79d5a210a334b17706465329e138f335649d14f27e78.lock
[INFO|file_utils.py:1301] 2021-01-13 11:04:56,231 >> https://huggingface.co/sshleifer/distilbart-cnn-12-6/resolve/main/config.json not found in cache or force_download set to True, downloading to /content/drive/My Drive/MAGMA: Summarization/fine-tuning/sshleifer?distilbart-cnn-12-6_karger_books_para_train/tmprjc9mj89
Downloading: 100% 1.62k/1.62k [00:00<00:00, 1.52MB/s]
[INFO|file_utils.py:1305] 2021-01-13 11:04:56,516 >> storing https://huggingface.co/sshleifer/distilbart-cnn-12-6/resolve/main/config.json in cache at /content/drive/My Drive/MAGMA: Summarization/fine-tuning/sshleifer?distilbart-cnn-12-6_karger_books_para_train/adac95cf641be69365b3dd7fe00d4114b3c7c77fb0572931db31a92d4995053b.9307b6cec4435559ec6e79d5a210a334b17706465329e138f335649d14f27e78
[INFO|file_utils.py:1308] 2021-01-13 11:04:56,518 >> creating metadata file for /content/drive/My Drive/MAGMA: Summarization/fine-tuning/sshleifer?distilbart-cnn-12-6_karger_books_para_train/adac95cf641be69365b3dd7fe00d4114b3c7c77fb0572931db31a92d4995053b.9307b6cec4435559ec6e79d5a210a334b17706465329e138f335649d14f27e78
01/13/2021 11:04:56 - INFO - filelock - Lock 140709138970608 released on /content/drive/My Drive/MAGMA: Summarization/fine-tuning/sshleifer?distilbart-cnn-12-6_karger_books_para_train/adac95cf641be69365b3dd7fe00d4114b3c7c77fb0572931db31a92d4995053b.9307b6cec4435559ec6e79d5a210a334b17706465329e138f335649d14f27e78.lock
[INFO|configuration_utils.py:431] 2021-01-13 11:04:56,522 >> loading configuration file https://huggingface.co/sshleifer/distilbart-cnn-12-6/resolve/main/config.json from cache at /content/drive/My Drive/MAGMA: Summarization/fine-tuning/sshleifer?distilbart-cnn-12-6_karger_books_para_train/adac95cf641be69365b3dd7fe00d4114b3c7c77fb0572931db31a92d4995053b.9307b6cec4435559ec6e79d5a210a334b17706465329e138f335649d14f27e78
[INFO|configuration_utils.py:467] 2021-01-13 11:04:56,523 >> Model config BartConfig {
"_num_labels": 3,
"activation_dropout": 0.0,
"activation_function": "gelu",
"add_bias_logits": false,
"add_final_layer_norm": false,
"architectures": [
"BartForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 0,
"classif_dropout": 0.0,
"classifier_dropout": 0.0,
"d_model": 1024,
"decoder_attention_heads": 16,
"decoder_ffn_dim": 4096,
"decoder_layerdrop": 0.0,
"decoder_layers": 6,
"decoder_start_token_id": 2,
"do_blenderbot_90_layernorm": false,
"dropout": 0.1,
"early_stopping": true,
"encoder_attention_heads": 16,
"encoder_ffn_dim": 4096,
"encoder_layerdrop": 0.0,
"encoder_layers": 12,
"eos_token_id": 2,
"extra_pos_embeddings": 2,
"force_bos_token_to_be_generated": true,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2"
},
"init_std": 0.02,
"is_encoder_decoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2
},
"length_penalty": 2.0,
"max_length": 142,
"max_position_embeddings": 1024,
"min_length": 56,
"model_type": "bart",
"no_repeat_ngram_size": 3,
"normalize_before": false,
"normalize_embedding": true,
"num_beams": 4,
"num_hidden_layers": 12,
"output_past": true,
"pad_token_id": 1,
"prefix": " ",
"replacing_rate": 0,
"scale_embedding": false,
"static_position_embeddings": false,
"student_decoder_layers": null,
"student_encoder_layers": null,
"task_specific_params": {
"summarization": {
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 142,
"min_length": 56,
"no_repeat_ngram_size": 3,
"num_beams": 4
}
},
"use_cache": true,
"vocab_size": 50264
}
```
You can see for example that the `min_length` parameter is different in the second output, which is the default one and not the one provided by me. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9563/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9562 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9562/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9562/comments | https://api.github.com/repos/huggingface/transformers/issues/9562/events | https://github.com/huggingface/transformers/pull/9562 | 785,013,709 | MDExOlB1bGxSZXF1ZXN0NTU0MTA5MDMx | 9,562 | Fix barthez tokenizer | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,610 | 1,610 | 1,610 | MEMBER | null | The barthez tokenizer should be put in the "no config tokenizer", as two tokenizers with the same configs can't be put together.
Running the [following code](https://github.com/huggingface/transformers/issues/9422#issuecomment-759327863) works now:
```py
from transformers import AutoTokenizer
barthez_tokenizer = AutoTokenizer.from_pretrained("moussaKam/barthez")
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9562/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9562",
"html_url": "https://github.com/huggingface/transformers/pull/9562",
"diff_url": "https://github.com/huggingface/transformers/pull/9562.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9562.patch",
"merged_at": 1610537051000
} |
https://api.github.com/repos/huggingface/transformers/issues/9561 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9561/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9561/comments | https://api.github.com/repos/huggingface/transformers/issues/9561/events | https://github.com/huggingface/transformers/pull/9561 | 784,987,458 | MDExOlB1bGxSZXF1ZXN0NTU0MDg3MDE3 | 9,561 | Fix slow tests v4.2.0 | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,610 | 1,610 | 1,610 | MEMBER | null | Fixes a bunch of slow tests that were failing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9561/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9561",
"html_url": "https://github.com/huggingface/transformers/pull/9561",
"diff_url": "https://github.com/huggingface/transformers/pull/9561.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9561.patch",
"merged_at": 1610549749000
} |
https://api.github.com/repos/huggingface/transformers/issues/9560 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9560/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9560/comments | https://api.github.com/repos/huggingface/transformers/issues/9560/events | https://github.com/huggingface/transformers/issues/9560 | 784,914,527 | MDU6SXNzdWU3ODQ5MTQ1Mjc= | 9,560 | Adding Megatron models. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Since DeepSpeed both integrates and uses Megatron-LM almost everywhere in its tutorials it most likely should just work. Of course, the devil is in the detail.\r\n\r\nAs I haven't had a chance to study/work with GPT2 yet, I will let others comment on the more important part of your query.",
"Any plans of adding MegatronT5? (https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/t5_model.py)",
"As this is a really old thread, perhaps make a request in a new Issue, @jordiae?\r\n\r\nAnd of course, if you're interested you're more than welcome to try and add it yourself. This is of course only an invitation.",
"> As this is a really old thread, perhaps make a request in a new Issue, @jordiae?\r\n> \r\n> And of course, if you're interested you're more than welcome to try and add it yourself. This is of course only an invitation.\r\n\r\nGot it! Posted here because the issue was open. Thanks.",
"Will close this issue as it's really kind of outdated."
] | 1,610 | 1,658 | 1,658 | CONTRIBUTOR | null | # 🌟 New model addition
Is it feasible to add Megatron models ? It seems the architecture is really just a GPT2, most of the work should be in creating the config, fusing layers from the available weights here: https://github.com/pytorch/fairseq/tree/master/examples/megatron_11b and making them available.
There are Nvidia's megatron (Bert and Gpt variants) and Facebook-11b megatron (gpt variant)
If we stick to that then we can't run the model on a single GPU, so we should probably make sure this is compatible with:
- https://github.com/huggingface/transformers/pull/9208
- https://github.com/huggingface/transformers/pull/9211
**Is keeping the current GPT2 architecture and using deepspeed's ZeRo and other parallelism schemes without touching original implementation feasible?**
## Model description
https://github.com/pytorch/fairseq/blob/e3c4282551e819853952284681e9ed60398c5c4a/examples/megatron_11b/README.md
<!-- Important information -->
## Open source status
* [x] the model implementation is available: https://github.com/ngoyal2707/Megatron-LM/blob/adb23324c222aad0aad89308e70302d996a5eaeb/mpu/transformer.py (Most of the work seems to be on Matrix parallelization)
* [x] the model weights are available: https://dl.fbaipublicfiles.com/fairseq/models/model_parallel/megatron_11b.tar.gz (Megatron 11b), https://github.com/NVIDIA/Megatron-LM#downloading-checkpoints (Nvidia's version, 3b and 8.3b don't seem to be available)
* [x] who are the authors: (mention them, if possible by @gh-username) Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, Bryan Catanzaro https://arxiv.org/abs/1909.08053
https://developer.nvidia.com/blog/language-modeling-using-megatron-a100-gpu/
@stas00 @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9560/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9559 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9559/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9559/comments | https://api.github.com/repos/huggingface/transformers/issues/9559/events | https://github.com/huggingface/transformers/issues/9559 | 784,910,865 | MDU6SXNzdWU3ODQ5MTA4NjU= | 9,559 | tokenizer decode method | {
"login": "Zessay",
"id": 39905704,
"node_id": "MDQ6VXNlcjM5OTA1NzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/39905704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zessay",
"html_url": "https://github.com/Zessay",
"followers_url": "https://api.github.com/users/Zessay/followers",
"following_url": "https://api.github.com/users/Zessay/following{/other_user}",
"gists_url": "https://api.github.com/users/Zessay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zessay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zessay/subscriptions",
"organizations_url": "https://api.github.com/users/Zessay/orgs",
"repos_url": "https://api.github.com/users/Zessay/repos",
"events_url": "https://api.github.com/users/Zessay/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zessay/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,610 | 1,614 | 1,614 | NONE | null | <!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.1
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.1+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
ray/raytune: @richardliaw @amogkam
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@mfuntowicz
## Information
When I learn the source code of the tokenizer method `decode`, I found a problem in the **line 719 of the file `tokenization_utils.py`**. In my view, the type of the variable `token` is `str`, while the type of the property `all_special_ids` is `List[int]`. Although this problem will not raise error, even has no effect to the `decode` method, I still think this is a case which is need to be fixed for us to understand.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I think it is required to replace `self.all_special_ids` with `self.all_special_tokens` in the **line 719 of the file `tokenization_utils.py`**. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9559/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9558 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9558/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9558/comments | https://api.github.com/repos/huggingface/transformers/issues/9558/events | https://github.com/huggingface/transformers/issues/9558 | 784,875,039 | MDU6SXNzdWU3ODQ4NzUwMzk= | 9,558 | SMITH Google | {
"login": "miketrimmel",
"id": 21080191,
"node_id": "MDQ6VXNlcjIxMDgwMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/21080191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/miketrimmel",
"html_url": "https://github.com/miketrimmel",
"followers_url": "https://api.github.com/users/miketrimmel/followers",
"following_url": "https://api.github.com/users/miketrimmel/following{/other_user}",
"gists_url": "https://api.github.com/users/miketrimmel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/miketrimmel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/miketrimmel/subscriptions",
"organizations_url": "https://api.github.com/users/miketrimmel/orgs",
"repos_url": "https://api.github.com/users/miketrimmel/repos",
"events_url": "https://api.github.com/users/miketrimmel/events{/privacy}",
"received_events_url": "https://api.github.com/users/miketrimmel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This is a duplicate. See #9526 ",
"Oh ok, thanks. "
] | 1,610 | 1,610 | 1,610 | NONE | null | # 🌟 New model addition
## Google´s SMITH Algorithm
## https://github.com/google-research/google-research/tree/master/smith
* [x] the model implementation is available: (give details)
* [ ] the model weights are available: (give details)
* [ ] who are the authors: (mention them, if possible by @gh-username)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9558/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9557 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9557/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9557/comments | https://api.github.com/repos/huggingface/transformers/issues/9557/events | https://github.com/huggingface/transformers/pull/9557 | 784,860,429 | MDExOlB1bGxSZXF1ZXN0NTUzOTgxODAz | 9,557 | Speed up TopKLogitsWarper and TopPLogitsWarper (pytorch) | {
"login": "LSinev",
"id": 12072891,
"node_id": "MDQ6VXNlcjEyMDcyODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/12072891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LSinev",
"html_url": "https://github.com/LSinev",
"followers_url": "https://api.github.com/users/LSinev/followers",
"following_url": "https://api.github.com/users/LSinev/following{/other_user}",
"gists_url": "https://api.github.com/users/LSinev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LSinev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LSinev/subscriptions",
"organizations_url": "https://api.github.com/users/LSinev/orgs",
"repos_url": "https://api.github.com/users/LSinev/repos",
"events_url": "https://api.github.com/users/LSinev/events{/privacy}",
"received_events_url": "https://api.github.com/users/LSinev/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This looks great @LSinev! ",
"Ok, looks good to merge. I checked that your implementation works with Pytorch 1.4 as well"
] | 1,610 | 1,619 | 1,610 | CONTRIBUTOR | null | # What does this PR do?
Speeds up TopKLogitsWarper and TopPLogitsWarper using torch filling functions.
Here's a minimal example to reproduce the slow behavior (and test speed of improvements):
```
import torch
from transformers import TopPLogitsWarper, TopKLogitsWarper, LogitsWarper
import timeit
class TopKLogitsWarperNew(LogitsWarper):
r"""
:class:`transformers.LogitsWarper` that performs top-k, i.e. restricting to the k highest probability elements.
Args:
top_k (:obj:`int`):
The number of highest probability vocabulary tokens to keep for top-k-filtering.
filter_value (:obj:`float`, `optional`, defaults to :obj:`-float("Inf")`):
All filtered values will be set to this float value.
min_tokens_to_keep (:obj:`int`, `optional`, defaults to 1):
Minimum number of tokens that cannot be filtered.
"""
def __init__(self, top_k: int, filter_value: float = -float("Inf"), min_tokens_to_keep: int = 1):
if not isinstance(top_k, int) or top_k <= 0:
raise ValueError(f"`top_k` has to be a strictly positive integer, but is {top_k}")
self.top_k = top_k
self.filter_value = filter_value
self.min_tokens_to_keep = min_tokens_to_keep
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
top_k = min(max(self.top_k, self.min_tokens_to_keep), scores.size(-1)) # Safety check
# Remove all tokens with a probability less than the last token of the top-k
indices_to_remove = scores < torch.topk(scores, top_k)[0][..., -1, None]
scores = scores.masked_fill(indices_to_remove, self.filter_value) # changed here
return scores
class TopPLogitsWarperNew(LogitsWarper):
"""
:class:`transformers.LogitsWarper` that performs top-p, i.e. restricting to top tokens summing to prob_cut_off <=
prob_cut_off.
Args:
top_p (:obj:`float`):
If set to < 1, only the most probable tokens with probabilities that add up to :obj:`top_p` or higher are
kept for generation.
filter_value (:obj:`float`, `optional`, defaults to :obj:`-float("Inf")`):
All filtered values will be set to this float value.
min_tokens_to_keep (:obj:`int`, `optional`, defaults to 1):
Minimum number of tokens that cannot be filtered.
"""
def __init__(self, top_p: float, filter_value: float = -float("Inf"), min_tokens_to_keep: int = 1):
if not isinstance(top_p, float) or (top_p < 0 or top_p > 1.0):
raise ValueError(f"`top_p` has to be a float > 0 and < 1, but is {top_p}")
self.top_p = top_p
self.filter_value = filter_value
self.min_tokens_to_keep = min_tokens_to_keep
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
sorted_logits, sorted_indices = torch.sort(scores, descending=True)
cumulative_probs = sorted_logits.softmax(dim=-1).cumsum(dim=-1) # changed here
# Remove tokens with cumulative top_p above the threshold (token with 0 are kept)
sorted_indices_to_remove = cumulative_probs > self.top_p
if self.min_tokens_to_keep > 1:
# Keep at least min_tokens_to_keep (set to min_tokens_to_keep-1 because we add the first one below)
sorted_indices_to_remove[..., : self.min_tokens_to_keep - 1] = 0
# Shift the indices to the right to keep also the first token above the threshold
sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
sorted_indices_to_remove[..., 0] = 0
# scatter sorted tensors to original indexing
indices_to_remove = sorted_indices_to_remove.scatter(1, sorted_indices, sorted_indices_to_remove)
scores = scores.masked_fill(indices_to_remove, self.filter_value) # changed here
return scores
input_ids = torch.randint(0, 10000, (256, 256))
scores = torch.randn(256, 10000)
top_k_lw = TopKLogitsWarper(100)
top_p_lw = TopPLogitsWarper(0.95)
top_k_lw_new = TopKLogitsWarperNew(100)
top_p_lw_new = TopPLogitsWarperNew(0.95)
print(f"Existing top_k impl time for 100 iterations on CPU = {timeit.timeit(lambda: top_k_lw(input_ids, scores), number=100)}")
print(f"Proposed top_k impl time for 100 iterations on CPU = {timeit.timeit(lambda: top_k_lw_new(input_ids, scores), number=100)}")
print(f"Existing top_p impl time for 100 iterations on CPU = {timeit.timeit(lambda: top_p_lw(input_ids, scores), number=100)}")
print(f"Proposed top_p impl time for 100 iterations on CPU = {timeit.timeit(lambda: top_p_lw_new(input_ids, scores), number=100)}")
if torch.cuda.is_available():
input_ids = input_ids.cuda()
scores = scores.cuda()
print(f"Existing top_k impl time for 100 iterations on GPU = {timeit.timeit(lambda: top_k_lw(input_ids, scores), number=100)}")
print(f"Proposed top_k impl time for 100 iterations on GPU = {timeit.timeit(lambda: top_k_lw_new(input_ids, scores), number=100)}")
print(f"Existing top_p impl time for 100 iterations on GPU = {timeit.timeit(lambda: top_p_lw(input_ids, scores), number=100)}")
print(f"Proposed top_p impl time for 100 iterations on GPU = {timeit.timeit(lambda: top_p_lw_new(input_ids, scores), number=100)}")
```
Timings reported:
```
Existing top_k impl time for 100 iterations on CPU = 2.5527561419994527
Proposed top_k impl time for 100 iterations on CPU = 0.36601612999947974
Existing top_p impl time for 100 iterations on CPU = 6.4072540179995485
Proposed top_p impl time for 100 iterations on CPU = 4.1470332960007
Existing top_k impl time for 100 iterations on GPU = 0.09082965299967327
Proposed top_k impl time for 100 iterations on GPU = 0.008193381999262783
Existing top_p impl time for 100 iterations on GPU = 1.1027910299999348
Proposed top_p impl time for 100 iterations on GPU = 0.9008321309993335
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9557/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9557",
"html_url": "https://github.com/huggingface/transformers/pull/9557",
"diff_url": "https://github.com/huggingface/transformers/pull/9557.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9557.patch",
"merged_at": 1610542068000
} |
https://api.github.com/repos/huggingface/transformers/issues/9556 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9556/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9556/comments | https://api.github.com/repos/huggingface/transformers/issues/9556/events | https://github.com/huggingface/transformers/issues/9556 | 784,741,946 | MDU6SXNzdWU3ODQ3NDE5NDY= | 9,556 | Where is convert_bert_original_tf_checkpoint_to_pytorch.py? | {
"login": "sednaasil",
"id": 46356860,
"node_id": "MDQ6VXNlcjQ2MzU2ODYw",
"avatar_url": "https://avatars.githubusercontent.com/u/46356860?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sednaasil",
"html_url": "https://github.com/sednaasil",
"followers_url": "https://api.github.com/users/sednaasil/followers",
"following_url": "https://api.github.com/users/sednaasil/following{/other_user}",
"gists_url": "https://api.github.com/users/sednaasil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sednaasil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sednaasil/subscriptions",
"organizations_url": "https://api.github.com/users/sednaasil/orgs",
"repos_url": "https://api.github.com/users/sednaasil/repos",
"events_url": "https://api.github.com/users/sednaasil/events{/privacy}",
"received_events_url": "https://api.github.com/users/sednaasil/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1897896961,
"node_id": "MDU6TGFiZWwxODk3ODk2OTYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Migration",
"name": "Migration",
"color": "e99695",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Its current location is [here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py).",
"Hi @sednaasil, what are you trying to do? Could you show the code that you're using so that we may help you debug it? Thanks.",
"Hi! \r\n\r\nI downloaded the uncased_L-12_H-768_A-12 BERT model to create an entity extraction tool following this [method](https://github.com/abhishekkrthakur/bert-entity-extraction). The model I downloaded did not include 'pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index' files, resulting in an error implementing my model. I saw a previous post where the files I needed were generated with the convert_bert_original_tf_checkpoint_to_pytorch.py file; however the link to the file was broken. Is this the correct way to proceed?\r\n\r\n",
"Where did you download your model from? Is something preventing you from using `bert-base-cased`?\r\n\r\n```py\r\nfrom transformers import BertModel\r\n\r\nmodel = BertModel.from_pretrained(\"bert-base-cased\")\r\n```",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,610 | 1,614 | 1,614 | NONE | null | HI:
I am getting the following error when implementing entity extraction in BERT. OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index']
I am very new to using BERT, and noted that [issue 2110](https://github.com/huggingface/transformers/issues/2110) had a similar issue. Issue 2110 was referred to the convert_bert_original_tf_checkpoint_to_pytorch.py file. However, the current link isn't working. Could you point me to its current location?
V/r,
L | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9556/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9555 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9555/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9555/comments | https://api.github.com/repos/huggingface/transformers/issues/9555/events | https://github.com/huggingface/transformers/issues/9555 | 784,741,233 | MDU6SXNzdWU3ODQ3NDEyMzM= | 9,555 | DPRReaderTokenizer does not generate the attention_mask properly | {
"login": "mkserge",
"id": 2992022,
"node_id": "MDQ6VXNlcjI5OTIwMjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2992022?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mkserge",
"html_url": "https://github.com/mkserge",
"followers_url": "https://api.github.com/users/mkserge/followers",
"following_url": "https://api.github.com/users/mkserge/following{/other_user}",
"gists_url": "https://api.github.com/users/mkserge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mkserge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mkserge/subscriptions",
"organizations_url": "https://api.github.com/users/mkserge/orgs",
"repos_url": "https://api.github.com/users/mkserge/repos",
"events_url": "https://api.github.com/users/mkserge/events{/privacy}",
"received_events_url": "https://api.github.com/users/mkserge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, it doesn't! We would gladly welcome a PR!",
"Closed by #9663 :)"
] | 1,610 | 1,611 | 1,611 | CONTRIBUTOR | null | Hello,
It seems like the DPRReaderTokenizer does not generate the `attention_mask` properly.
Steps to reproduce on the master branch
```bash
(venv) sergey_mkrtchyan test (master) $ python
Python 3.8.6 (v3.8.6:db455296be, Sep 23 2020, 13:31:39)
[Clang 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import DPRReaderTokenizer, DPRReader
>>> tokenizer = DPRReaderTokenizer.from_pretrained('facebook/dpr-reader-single-nq-base')
>>> model = DPRReader.from_pretrained('facebook/dpr-reader-single-nq-base')
>>> encoded_inputs = tokenizer(questions="What is love ?",
... titles="Haddaway",
... texts="What Is Love is a song recorded by the artist Haddaway",
... padding=True,
... return_tensors='pt')
>>> encoded_inputs
{'input_ids': tensor([[ 101, 2054, 2003, 2293, 1029, 102, 2018, 2850, 4576, 102, 2054, 2003,
2293, 2003, 1037, 2299, 2680, 2011, 1996, 3063, 2018, 2850, 4576]]), 'attention_mask': tensor([True])}
```
Notice the `attention_mask` above is incorrect. It should have the same shape as the `input_ids` tensor.
## Environment info
- `transformers` version: 4.2.0dev0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Git blame says @lhoestq and @LysandreJik might be able to help :)
I believe the issue is in this part of the code
https://github.com/huggingface/transformers/blob/5f6721032af46cf491fe69c010805f8786bf63a1/src/transformers/models/dpr/tokenization_dpr.py#L254
(same thing for the fast tokenizer)
I fixed it locally by replacing the above line with
```Python
attention_mask = []
for input_ids in encoded_inputs["input_ids"]:
attention_mask.append([int(input_id != self.pad_token_id) for input_id in input_ids])
```
I am happy to submit a PR if that looks reasonable to you.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9555/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9555/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9554 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9554/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9554/comments | https://api.github.com/repos/huggingface/transformers/issues/9554/events | https://github.com/huggingface/transformers/pull/9554 | 784,710,487 | MDExOlB1bGxSZXF1ZXN0NTUzODU4MTEw | 9,554 | Fix classification script: enable dynamic padding with truncation | {
"login": "pashok3d",
"id": 35535358,
"node_id": "MDQ6VXNlcjM1NTM1MzU4",
"avatar_url": "https://avatars.githubusercontent.com/u/35535358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pashok3d",
"html_url": "https://github.com/pashok3d",
"followers_url": "https://api.github.com/users/pashok3d/followers",
"following_url": "https://api.github.com/users/pashok3d/following{/other_user}",
"gists_url": "https://api.github.com/users/pashok3d/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pashok3d/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pashok3d/subscriptions",
"organizations_url": "https://api.github.com/users/pashok3d/orgs",
"repos_url": "https://api.github.com/users/pashok3d/repos",
"events_url": "https://api.github.com/users/pashok3d/events{/privacy}",
"received_events_url": "https://api.github.com/users/pashok3d/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,610 | 1,610 | 1,610 | CONTRIBUTOR | null | # What does this PR do?
To fix the issue (below) in the run_glue.pl script, tokenizer's `max_length` value is now assigned directly with `max_seq_length` argument. Now it is possible to truncate the sequence and use dynamic padding. By default `max_length` is 128, which means truncation to 128 tokens. To disable truncation, set `max_length = None`.
Fixes #9551
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? https://github.com/huggingface/transformers/issues/9551#issue-784679542
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9554/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9554",
"html_url": "https://github.com/huggingface/transformers/pull/9554",
"diff_url": "https://github.com/huggingface/transformers/pull/9554.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9554.patch",
"merged_at": 1610542008000
} |
https://api.github.com/repos/huggingface/transformers/issues/9553 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9553/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9553/comments | https://api.github.com/repos/huggingface/transformers/issues/9553/events | https://github.com/huggingface/transformers/pull/9553 | 784,690,416 | MDExOlB1bGxSZXF1ZXN0NTUzODQxODQw | 9,553 | [setup.py] note on how to get to transformers exact dependencies from shell | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,610 | 1,610 | 1,610 | CONTRIBUTOR | null | As a follow up to #9550, this PR adds a few handy one liners to quickly access the correct dependency versions from shell.
e.g if you want to install deps for a group of packages we control and with their correct versions, you just need to run:
```
pip install -U $(python -c 'import sys; from transformers.dependency_versions_table import deps; \
print(" ".join([deps[x] for x in sys.argv[1:]]))' numpy filelock protobuf requests tqdm regex \
sentencepiece sacremoses tokenizers packaging importlib_metadata)
```
that was one option for torchhub, but since it didn't have `transformers` installed it didn't work and a different solution was provided https://github.com/huggingface/transformers/pull/9552
@sgugger, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9553/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9553",
"html_url": "https://github.com/huggingface/transformers/pull/9553",
"diff_url": "https://github.com/huggingface/transformers/pull/9553.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9553.patch",
"merged_at": 1610618649000
} |
https://api.github.com/repos/huggingface/transformers/issues/9552 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9552/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9552/comments | https://api.github.com/repos/huggingface/transformers/issues/9552/events | https://github.com/huggingface/transformers/pull/9552 | 784,685,711 | MDExOlB1bGxSZXF1ZXN0NTUzODM3ODcw | 9,552 | [CI] use correct deps for torchhub | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Is there a way to leverage this to also update the dependencies in [`hubconf.py`](https://github.com/huggingface/transformers/blob/master/hubconf.py)?",
"From what I understand, you need to install the dependencies by hand before, so what would it add to have this in hubconf? From what I gathered this list of \"dependencies\" is just there to be dynamically imported when executing the code to import the model, but it's not doing anything for the install.",
"> From what I understand, you need to install the dependencies by hand before, so what would it add to have this in hubconf? From what I gathered this list of \"dependencies\" is just there to be dynamically imported when executing the code to import the model, but it's not doing anything for the install.\r\n\r\nWhat Sylvain said.\r\n\r\nMoreover we validated that yesterday, since trying to add specific versions in `hubconf.py` made no difference. So that var should have been called \"imports\" to be more precise.\r\n"
] | 1,610 | 1,610 | 1,610 | CONTRIBUTOR | null | As a follow up to https://github.com/huggingface/transformers/pull/9550, here is a clean solution that requires only one source (`setup.py`) to edit for dependencies and groups of thereof.
The PR
1. defines a new dependency group for `torchhub` in `setup.py`
2. installs the exact dependencies of that group inside .github workflow
3. uninstalls `transformers` since Sylvain said it shouldn't be there, but it had to be installed to get the deps easily.
Of course, it'll need to be merged first for:
```
pip install -e git+https://github.com/huggingface/transformers.git#egg=transformers[torchhub]
```
to work, since it's not there now... meanwhile you can test it from this branch:
```
pip install -e git+https://github.com/stas00/transformers.git@torchhub-deps#egg=transformers[torchhub]
```
-------------
Alternatively to:
```
pip install -e git+https://github.com/huggingface/transformers.git#egg=transformers[torchhub]
pip uninstall -y transformers
```
we can do:
```
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e .[torchhub]
pip uninstall -y transformers
```
for a few fractions of seconds faster.
----------------
Yet, another approach it to extend `setup.py` with what I created here a few year back:
https://github.com/fastai/fastai1/blob/a8327427ad5137c4899a1b4f74745193c9ea5be3/setup.py#L11-L22
This then:
```
python setup.py -q deps --dep-groups=torchhub
```
would dump the dependencies just for the specified extra groups, which can then be fed to `pip install`, so there will be no need to install the main package. Literally, the above command would just dump `extras["torchhub"]` in this case.
--------------
Finally, we could make `src/transformers/dependency_versions_table.py` contain the full dependency groups as well, and again then it'll just be needed to get one's hands on that file to extract groups of dependencies, e.g.:
```
wget https://raw.githubusercontent.com/huggingface/transformers/master/src/transformers/dependency_versions_table.py
python -c 'import sys; from dependency_versions_table import dep_group; print(dep_group[sys.argv[1]])' torchhub
```
this is hypothetical since we don't currently have `dep_group` dict in `dependency_versions_table.py`.
@sgugger, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9552/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9552",
"html_url": "https://github.com/huggingface/transformers/pull/9552",
"diff_url": "https://github.com/huggingface/transformers/pull/9552.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9552.patch",
"merged_at": 1610542974000
} |
https://api.github.com/repos/huggingface/transformers/issues/9551 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9551/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9551/comments | https://api.github.com/repos/huggingface/transformers/issues/9551/events | https://github.com/huggingface/transformers/issues/9551 | 784,679,542 | MDU6SXNzdWU3ODQ2Nzk1NDI= | 9,551 | Dynamic padding + truncation in classification script | {
"login": "pashok3d",
"id": 35535358,
"node_id": "MDQ6VXNlcjM1NTM1MzU4",
"avatar_url": "https://avatars.githubusercontent.com/u/35535358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pashok3d",
"html_url": "https://github.com/pashok3d",
"followers_url": "https://api.github.com/users/pashok3d/followers",
"following_url": "https://api.github.com/users/pashok3d/following{/other_user}",
"gists_url": "https://api.github.com/users/pashok3d/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pashok3d/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pashok3d/subscriptions",
"organizations_url": "https://api.github.com/users/pashok3d/orgs",
"repos_url": "https://api.github.com/users/pashok3d/repos",
"events_url": "https://api.github.com/users/pashok3d/events{/privacy}",
"received_events_url": "https://api.github.com/users/pashok3d/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, the `max_length` should be passed the same way. Would like to open a PR to fix `run_glue.py`?",
"Okay, I will do that."
] | 1,610 | 1,610 | 1,610 | CONTRIBUTOR | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
@VictorSanh
@sgugger
It seems that it is not possible to use dynamic padding along with truncation in classification script, because when `tokenizer` gets `max_length = None` it just skips truncation.
https://github.com/huggingface/transformers/blob/063d8d27f4e1d089dc76f22e378b86b219167e3b/examples/text-classification/run_glue.py#L290
On the other hand, in language modeling script it works.
https://github.com/huggingface/transformers/blob/063d8d27f4e1d089dc76f22e378b86b219167e3b/examples/language-modeling/run_mlm.py#L311
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9551/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9550 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9550/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9550/comments | https://api.github.com/repos/huggingface/transformers/issues/9550/events | https://github.com/huggingface/transformers/pull/9550 | 784,662,863 | MDExOlB1bGxSZXF1ZXN0NTUzODE4OTY1 | 9,550 | Use the right version of tokenizers | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Test passes, so merging to get the CI green."
] | 1,610 | 1,610 | 1,610 | COLLABORATOR | null | # What does this PR do?
Pulls the version of tokenziers from our deps in the `hubconf.py` otherwise it might install a version of tokenizers that is more recent (if available on pypi). When that is the case, the check of our packages fails at import. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9550/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9550",
"html_url": "https://github.com/huggingface/transformers/pull/9550",
"diff_url": "https://github.com/huggingface/transformers/pull/9550.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9550.patch",
"merged_at": 1610495746000
} |
https://api.github.com/repos/huggingface/transformers/issues/9549 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9549/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9549/comments | https://api.github.com/repos/huggingface/transformers/issues/9549/events | https://github.com/huggingface/transformers/pull/9549 | 784,661,339 | MDExOlB1bGxSZXF1ZXN0NTUzODE3NjU5 | 9,549 | Use the right version of tokenizers for torchhub | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Branched from my last PR and not master..."
] | 1,610 | 1,610 | 1,610 | COLLABORATOR | null | # What does this PR do?
The hubconf.py is using `tokenizers` without checking the version transformers needs, which yields to an import error if a more recent version of tokenziers is available on pypi (like right now). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9549/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9549",
"html_url": "https://github.com/huggingface/transformers/pull/9549",
"diff_url": "https://github.com/huggingface/transformers/pull/9549.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9549.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9548 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9548/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9548/comments | https://api.github.com/repos/huggingface/transformers/issues/9548/events | https://github.com/huggingface/transformers/issues/9548 | 784,598,332 | MDU6SXNzdWU3ODQ1OTgzMzI= | 9,548 | Quick tour runs into OOM on Colab | {
"login": "SaschaHeyer",
"id": 1991664,
"node_id": "MDQ6VXNlcjE5OTE2NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1991664?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaschaHeyer",
"html_url": "https://github.com/SaschaHeyer",
"followers_url": "https://api.github.com/users/SaschaHeyer/followers",
"following_url": "https://api.github.com/users/SaschaHeyer/following{/other_user}",
"gists_url": "https://api.github.com/users/SaschaHeyer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaschaHeyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaschaHeyer/subscriptions",
"organizations_url": "https://api.github.com/users/SaschaHeyer/orgs",
"repos_url": "https://api.github.com/users/SaschaHeyer/repos",
"events_url": "https://api.github.com/users/SaschaHeyer/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaschaHeyer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If you don't have any memory left, you should use a lower batch size. In the line:\r\n``` \r\n>>> tfdataset = tf.data.Dataset.from_tensor_slices((features, dataset[\"labels\"])).batch(32)\r\n```\r\nreplace 32 by something lower.\r\n\r\nAlso, please use the [forums](https://discuss.huggingface.co/) for this kind of questions."
] | 1,610 | 1,610 | 1,610 | NONE | null | ## Environment info
Google Colab
### Who can help
@jplu @LysandreJik @sgugger
## Information
Followed the quick tour using a Colab notebook https://huggingface.co/docs/datasets/quicktour.html#fine-tuning-a-deep-learning-model
Colab Runtime type: GPU
But the training process runs into OOM
```
---------------------------------------------------------------------------
ResourceExhaustedError Traceback (most recent call last)
<ipython-input-7-cb1582039f5e> in <module>()
2 opt = tf.keras.optimizers.Adam(learning_rate=3e-5)
3 model.compile(optimizer=opt, loss=loss_fn, metrics=["accuracy"])
----> 4 model.fit(tfdataset, epochs=3)
6 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
58 ctx.ensure_initialized()
59 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 60 inputs, attrs, num_outputs)
61 except core._NotOkStatusException as e:
62 if name is not None:
ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[32,512,768] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node tf_bert_for_sequence_classification/bert/encoder/layer_._4/attention/output/LayerNorm/batchnorm/mul_1 (defined at /usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_tf_bert.py:327) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[gradient_tape/tf_bert_for_sequence_classification/bert/embeddings/position_embeddings/embedding_lookup/Reshape/_532]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[32,512,768] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node tf_bert_for_sequence_classification/bert/encoder/layer_._4/attention/output/LayerNorm/batchnorm/mul_1 (defined at /usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_tf_bert.py:327) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_24759]
Function call stack:
train_function -> train_function
```
## To reproduce
Steps to reproduce the behavior:
The steps to reproduce are in the public accessible Colab notebook
[https://colab.research.google.com/drive/1Q3tBx57f2A8Hn1D7IXS-1nKandB86S3f](https://colab.research.google.com/drive/1Q3tBx57f2A8Hn1D7IXS-1nKandB86S3f)
## Expected behavior
Sample runs
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9548/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9547 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9547/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9547/comments | https://api.github.com/repos/huggingface/transformers/issues/9547/events | https://github.com/huggingface/transformers/issues/9547 | 784,589,757 | MDU6SXNzdWU3ODQ1ODk3NTc= | 9,547 | Fine-tuning LMwithNSP | {
"login": "7AM7",
"id": 24973739,
"node_id": "MDQ6VXNlcjI0OTczNzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/24973739?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/7AM7",
"html_url": "https://github.com/7AM7",
"followers_url": "https://api.github.com/users/7AM7/followers",
"following_url": "https://api.github.com/users/7AM7/following{/other_user}",
"gists_url": "https://api.github.com/users/7AM7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/7AM7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/7AM7/subscriptions",
"organizations_url": "https://api.github.com/users/7AM7/orgs",
"repos_url": "https://api.github.com/users/7AM7/repos",
"events_url": "https://api.github.com/users/7AM7/events{/privacy}",
"received_events_url": "https://api.github.com/users/7AM7/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, could you please provide everything asked in the issue template? Information relative to your environment as well as the code that triggered the error. Thanks.",
"environment transformers 4.0.0 and my code below and message error you can get it after code\r\n\r\n```py\r\nfrom __future__ import absolute_import\r\nfrom __future__ import division\r\nfrom __future__ import print_function\r\n\r\nimport os\r\nimport logging\r\nimport argparse\r\nfrom tqdm import tqdm, trange\r\n\r\nimport numpy as np\r\nimport torch\r\nfrom torch.utils.data import DataLoader, RandomSampler , SequentialSampler\r\nfrom torch.utils.data.distributed import DistributedSampler\r\n\r\n#from pytorch_pretrained_bert.tokenization import BertTokenizer\r\n#from pytorch_pretrained_bert.modeling import BertForPreTraining\r\nfrom transformers import BertTokenizer, BertForPreTraining\r\n#from pytorch_pretrained_bert.optimization import BertAdam\r\nfrom transformers import XLNetTokenizer\r\nfrom transformers import AdamW, get_linear_schedule_with_warmup\r\n#from transformers import BertForPreTraining\r\nimport sentencepiece as spm\r\n\r\nfrom torch.utils.data import Dataset\r\nimport random\r\n\r\nlogging.basicConfig(format='%(asctime)s - %(levelname)s - %(name)s - %(message)s',\r\n datefmt='%m/%d/%Y %H:%M:%S',\r\n level=logging.INFO)\r\nlogger = logging.getLogger(__name__)\r\n\r\n\r\ndef warmup_linear(x, warmup=0.002):\r\n if x < warmup:\r\n return x / warmup\r\n return 1.0 - x\r\n\r\n\r\ndef accuracy(out, labels, total_test):\r\n class_preds = out.data.cpu().numpy().argmax(axis=-1)\r\n labels = labels.data.cpu().numpy()\r\n return np.sum(class_preds == labels) / total_test\r\n\r\n\r\nclass BERTDataset(Dataset):\r\n def __init__(self, corpus_path, tokenizer, seq_len, encoding=\"utf-8\", corpus_lines=None, on_memory=True):\r\n self.vocab = tokenizer.vocab\r\n self.tokenizer = tokenizer\r\n self.seq_len = seq_len\r\n self.on_memory = on_memory\r\n self.corpus_lines = corpus_lines # number of non-empty lines in input corpus\r\n self.corpus_path = corpus_path\r\n self.encoding = encoding\r\n self.current_doc = 0 # to avoid random sentence from same doc\r\n\r\n # for loading samples directly from file\r\n self.sample_counter = 0 # used to keep track of full epochs on file\r\n self.line_buffer = None # keep second sentence of a pair in memory and use as first sentence in next pair\r\n\r\n # for loading samples in memory\r\n self.current_random_doc = 0\r\n self.num_docs = 0\r\n self.sample_to_doc = [] # map sample index to doc and line\r\n\r\n # load samples into memory\r\n if on_memory:\r\n self.all_docs = []\r\n doc = []\r\n self.corpus_lines = 0\r\n with open(corpus_path, \"r\", encoding=encoding) as f:\r\n for line in tqdm(f, desc=\"Loading Dataset\", total=corpus_lines):\r\n line = line.strip()\r\n if line == \"\":\r\n self.all_docs.append(doc)\r\n doc = []\r\n # remove last added sample because there won't be a subsequent line anymore in the doc\r\n self.sample_to_doc.pop()\r\n else:\r\n # store as one sample\r\n sample = {\"doc_id\": len(self.all_docs),\r\n \"line\": len(doc)}\r\n self.sample_to_doc.append(sample)\r\n doc.append(line)\r\n self.corpus_lines = self.corpus_lines + 1\r\n\r\n # if last row in file is not empty\r\n if self.all_docs[-1] != doc:\r\n self.all_docs.append(doc)\r\n self.sample_to_doc.pop()\r\n\r\n self.num_docs = len(self.all_docs)\r\n\r\n # load samples later lazily from disk\r\n else:\r\n if self.corpus_lines is None:\r\n with open(corpus_path, \"r\", encoding=encoding) as f:\r\n self.corpus_lines = 0\r\n for line in tqdm(f, desc=\"Loading Dataset\", total=corpus_lines):\r\n if line.strip() == \"\":\r\n self.num_docs += 1\r\n else:\r\n self.corpus_lines += 1\r\n\r\n # if doc does not end with empty line\r\n if line.strip() != \"\":\r\n self.num_docs += 1\r\n\r\n self.file = open(corpus_path, \"r\", encoding=encoding)\r\n self.random_file = open(corpus_path, \"r\", encoding=encoding)\r\n\r\n def __len__(self):\r\n # last line of doc won't be used, because there's no \"nextSentence\". Additionally, we start counting at 0.\r\n return self.corpus_lines - self.num_docs - 1\r\n\r\n def __getitem__(self, item):\r\n cur_id = self.sample_counter\r\n self.sample_counter += 1\r\n if not self.on_memory:\r\n # after one epoch we start again from beginning of file\r\n if cur_id != 0 and (cur_id % len(self) == 0):\r\n self.file.close()\r\n self.file = open(self.corpus_path, \"r\", encoding=self.encoding)\r\n\r\n t1, t2, is_next_label = self.random_sent(item)\r\n\r\n # tokenize\r\n tokens_a = self.tokenizer.tokenize(t1)\r\n tokens_b = self.tokenizer.tokenize(t2)\r\n\r\n # combine to one sample\r\n cur_example = InputExample(guid=cur_id, tokens_a=tokens_a, tokens_b=tokens_b, is_next=is_next_label)\r\n\r\n # transform sample to features\r\n cur_features = convert_example_to_features(cur_example, self.seq_len, self.tokenizer)\r\n\r\n cur_tensors = (torch.tensor(cur_features.input_ids),\r\n torch.tensor(cur_features.input_mask),\r\n torch.tensor(cur_features.segment_ids),\r\n torch.tensor(cur_features.lm_label_ids),\r\n torch.tensor(cur_features.is_next))\r\n\r\n return cur_tensors\r\n\r\n def random_sent(self, index):\r\n \"\"\"\r\n Get one sample from corpus consisting of two sentences. With prob. 50% these are two subsequent sentences\r\n from one doc. With 50% the second sentence will be a random one from another doc.\r\n :param index: int, index of sample.\r\n :return: (str, str, int), sentence 1, sentence 2, isNextSentence Label\r\n \"\"\"\r\n t1, t2 = self.get_corpus_line(index)\r\n if random.random() > 0.5:\r\n label = 0\r\n else:\r\n t2 = self.get_random_line()\r\n label = 1\r\n\r\n assert len(t1) > 0\r\n assert len(t2) > 0\r\n return t1, t2, label\r\n\r\n def get_corpus_line(self, item):\r\n \"\"\"\r\n Get one sample from corpus consisting of a pair of two subsequent lines from the same doc.\r\n :param item: int, index of sample.\r\n :return: (str, str), two subsequent sentences from corpus\r\n \"\"\"\r\n t1 = \"\"\r\n t2 = \"\"\r\n assert item < self.corpus_lines\r\n if self.on_memory:\r\n sample = self.sample_to_doc[item]\r\n t1 = self.all_docs[sample[\"doc_id\"]][sample[\"line\"]]\r\n t2 = self.all_docs[sample[\"doc_id\"]][sample[\"line\"] + 1]\r\n # used later to avoid random nextSentence from same doc\r\n self.current_doc = sample[\"doc_id\"]\r\n return t1, t2\r\n else:\r\n if self.line_buffer is None:\r\n # read first non-empty line of file\r\n while t1 == \"\":\r\n t1 = self.file.__next__().strip()\r\n t2 = self.file.__next__().strip()\r\n else:\r\n # use t2 from previous iteration as new t1\r\n t1 = self.line_buffer\r\n t2 = self.file.__next__().strip()\r\n # skip empty rows that are used for separating documents and keep track of current doc id\r\n while t2 == \"\" or t1 == \"\":\r\n t1 = self.file.__next__().strip()\r\n t2 = self.file.__next__().strip()\r\n self.current_doc = self.current_doc + 1\r\n self.line_buffer = t2\r\n\r\n assert t1 != \"\"\r\n assert t2 != \"\"\r\n return t1, t2\r\n\r\n def get_random_line(self):\r\n \"\"\"\r\n Get random line from another document for nextSentence task.\r\n :return: str, content of one line\r\n \"\"\"\r\n # Similar to original tf repo: This outer loop should rarely go for more than one iteration for large\r\n # corpora. However, just to be careful, we try to make sure that\r\n # the random document is not the same as the document we're processing.\r\n for _ in range(10):\r\n if self.on_memory:\r\n rand_doc_idx = random.randint(0, len(self.all_docs) - 1)\r\n rand_doc = self.all_docs[rand_doc_idx]\r\n line = rand_doc[random.randrange(len(rand_doc))]\r\n else:\r\n rand_index = random.randint(1, self.corpus_lines if self.corpus_lines < 1000 else 1000)\r\n # pick random line\r\n for _ in range(rand_index):\r\n line = self.get_next_line()\r\n # check if our picked random line is really from another doc like we want it to be\r\n if self.current_random_doc != self.current_doc:\r\n break\r\n return line\r\n\r\n def get_next_line(self):\r\n \"\"\" Gets next line of random_file and starts over when reaching end of file\"\"\"\r\n try:\r\n line = self.random_file.__next__().strip()\r\n # keep track of which document we are currently looking at to later avoid having the same doc as t1\r\n if line == \"\":\r\n self.current_random_doc = self.current_random_doc + 1\r\n line = self.random_file.__next__().strip()\r\n except StopIteration:\r\n self.random_file.close()\r\n self.random_file = open(self.corpus_path, \"r\", encoding=self.encoding)\r\n line = self.random_file.__next__().strip()\r\n return line\r\n\r\n\r\nclass InputExample(object):\r\n \"\"\"A single training/test example for the language model.\"\"\"\r\n\r\n def __init__(self, guid, tokens_a, tokens_b=None, is_next=None, lm_labels=None):\r\n \"\"\"Constructs a InputExample.\r\n Args:\r\n guid: Unique id for the example.\r\n tokens_a: string. The untokenized text of the first sequence. For single\r\n sequence tasks, only this sequence must be specified.\r\n tokens_b: (Optional) string. The untokenized text of the second sequence.\r\n Only must be specified for sequence pair tasks.\r\n label: (Optional) string. The label of the example. This should be\r\n specified for train and dev examples, but not for test examples.\r\n \"\"\"\r\n self.guid = guid\r\n self.tokens_a = tokens_a\r\n self.tokens_b = tokens_b\r\n self.is_next = is_next # nextSentence\r\n self.lm_labels = lm_labels # masked words for language model\r\n\r\n\r\nclass InputFeatures(object):\r\n \"\"\"A single set of features of data.\"\"\"\r\n\r\n def __init__(self, input_ids, input_mask, segment_ids, is_next, lm_label_ids):\r\n self.input_ids = input_ids\r\n self.input_mask = input_mask\r\n self.segment_ids = segment_ids\r\n self.is_next = is_next\r\n self.lm_label_ids = lm_label_ids\r\n\r\n\r\ndef random_word(tokens, tokenizer):\r\n \"\"\"\r\n Masking some random tokens for Language Model task with probabilities as in the original BERT paper.\r\n :param tokens: list of str, tokenized sentence.\r\n :param tokenizer: Tokenizer, object used for tokenization (we need it's vocab here)\r\n :return: (list of str, list of int), masked tokens and related labels for LM prediction\r\n \"\"\"\r\n output_label = []\r\n\r\n for i, token in enumerate(tokens):\r\n prob = random.random()\r\n # mask token with 15% probability\r\n if prob < 0.15:\r\n prob /= 0.15\r\n\r\n # 80% randomly change token to mask token\r\n if prob < 0.8:\r\n tokens[i] = \"[MASK]\"\r\n\r\n # 10% randomly change token to random token\r\n elif prob < 0.9:\r\n tokens[i] = random.choice(list(tokenizer.vocab.items()))[0]\r\n\r\n # -> rest 10% randomly keep current token\r\n\r\n # append current token to output (we will predict these later)\r\n try:\r\n output_label.append(tokenizer.vocab[token])\r\n except KeyError:\r\n # For unknown words (should not occur with BPE vocab)\r\n output_label.append(tokenizer.vocab[\"[UNK]\"])\r\n logger.warning(\"Cannot find token '{}' in vocab. Using [UNK] insetad\".format(token))\r\n else:\r\n # no masking token (will be ignored by loss function later)\r\n output_label.append(-1)\r\n\r\n return tokens, output_label\r\n\r\n\r\ndef convert_example_to_features(example, max_seq_length, tokenizer):\r\n \"\"\"\r\n Convert a raw sample (pair of sentences as tokenized strings) into a proper training sample with\r\n IDs, LM labels, input_mask, CLS and SEP tokens etc.\r\n :param example: InputExample, containing sentence input as strings and is_next label\r\n :param max_seq_length: int, maximum length of sequence.\r\n :param tokenizer: Tokenizer\r\n :return: InputFeatures, containing all inputs and labels of one sample as IDs (as used for model training)\r\n \"\"\"\r\n tokens_a = example.tokens_a\r\n tokens_b = example.tokens_b\r\n # Modifies `tokens_a` and `tokens_b` in place so that the total\r\n # length is less than the specified length.\r\n # Account for [CLS], [SEP], [SEP] with \"- 3\"\r\n _truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3)\r\n\r\n t1_random, t1_label = random_word(tokens_a, tokenizer)\r\n t2_random, t2_label = random_word(tokens_b, tokenizer)\r\n # concatenate lm labels and account for CLS, SEP, SEP\r\n lm_label_ids = ([-1] + t1_label + [-1] + t2_label + [-1])\r\n\r\n # The convention in BERT is:\r\n # (a) For sequence pairs:\r\n # tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]\r\n # type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1\r\n # (b) For single sequences:\r\n # tokens: [CLS] the dog is hairy . [SEP]\r\n # type_ids: 0 0 0 0 0 0 0\r\n #\r\n # Where \"type_ids\" are used to indicate whether this is the first\r\n # sequence or the second sequence. The embedding vectors for `type=0` and\r\n # `type=1` were learned during pre-training and are added to the wordpiece\r\n # embedding vector (and position vector). This is not *strictly* necessary\r\n # since the [SEP] token unambigiously separates the sequences, but it makes\r\n # it easier for the model to learn the concept of sequences.\r\n #\r\n # For classification tasks, the first vector (corresponding to [CLS]) is\r\n # used as as the \"sentence vector\". Note that this only makes sense because\r\n # the entire model is fine-tuned.\r\n tokens = []\r\n segment_ids = []\r\n tokens.append(\"[CLS]\")\r\n segment_ids.append(0)\r\n for token in tokens_a:\r\n tokens.append(token)\r\n segment_ids.append(0)\r\n tokens.append(\"[SEP]\")\r\n segment_ids.append(0)\r\n\r\n assert len(tokens_b) > 0\r\n for token in tokens_b:\r\n tokens.append(token)\r\n segment_ids.append(1)\r\n tokens.append(\"[SEP]\")\r\n segment_ids.append(1)\r\n\r\n input_ids = tokenizer.convert_tokens_to_ids(tokens)\r\n\r\n # The mask has 1 for real tokens and 0 for padding tokens. Only real\r\n # tokens are attended to.\r\n input_mask = [1] * len(input_ids)\r\n\r\n # Zero-pad up to the sequence length.\r\n while len(input_ids) < max_seq_length:\r\n input_ids.append(0)\r\n input_mask.append(0)\r\n segment_ids.append(0)\r\n lm_label_ids.append(-1)\r\n\r\n assert len(input_ids) == max_seq_length\r\n assert len(input_mask) == max_seq_length\r\n assert len(segment_ids) == max_seq_length\r\n assert len(lm_label_ids) == max_seq_length\r\n\r\n if example.guid < 5:\r\n logger.info(\"*** Example ***\")\r\n logger.info(\"guid: %s\" % (example.guid))\r\n logger.info(\"tokens: %s\" % \" \".join(\r\n [str(x) for x in tokens]))\r\n logger.info(\"input_ids: %s\" % \" \".join([str(x) for x in input_ids]))\r\n logger.info(\"input_mask: %s\" % \" \".join([str(x) for x in input_mask]))\r\n logger.info(\r\n \"segment_ids: %s\" % \" \".join([str(x) for x in segment_ids]))\r\n logger.info(\"LM label: %s \" % (lm_label_ids))\r\n logger.info(\"Is next sentence label: %s \" % (example.is_next))\r\n\r\n features = InputFeatures(input_ids=input_ids,\r\n input_mask=input_mask,\r\n segment_ids=segment_ids,\r\n lm_label_ids=lm_label_ids,\r\n is_next=example.is_next)\r\n return features\r\n\r\n\r\ndef main():\r\n parser = argparse.ArgumentParser()\r\n\r\n ## Required parameters\r\n parser.add_argument(\"--train_file\",\r\n default=None,\r\n type=str,\r\n required=True,\r\n help=\"The input train corpus.\")\r\n parser.add_argument(\"--test_file\",\r\n default=None,\r\n type=str,\r\n required=True,\r\n help=\"The input test corpus.\")\r\n\r\n parser.add_argument(\"--tokenizer_model\", default=None, type=str, required=True,\r\n help=\"tokenizer pre-trained model selected in the list: bert-base-uncased, \"\r\n \"bert-large-uncased, bert-base-cased, bert-base-multilingual, bert-base-chinese.\")\r\n\r\n parser.add_argument(\"--bert_model\", default=None, type=str, required=True,\r\n help=\"Bert pre-trained model selected in the list: bert-base-uncased, \"\r\n \"bert-large-uncased, bert-base-cased, bert-base-multilingual, bert-base-chinese.\")\r\n\r\n parser.add_argument(\"--config_file\", default=None, type=str, required=True,\r\n help=\"Bert pre-trained model selected in the list: bert-base-uncased, \"\r\n \"bert-large-uncased, bert-base-cased, bert-base-multilingual, bert-base-chinese.\")\r\n\r\n parser.add_argument(\"--output_dir\",\r\n default=None,\r\n type=str,\r\n required=True,\r\n help=\"The output directory where the model checkpoints will be written.\")\r\n ## Other parameters\r\n parser.add_argument(\"--max_seq_length\",\r\n default=128,\r\n type=int,\r\n help=\"The maximum total input sequence length after WordPiece tokenization. \\n\"\r\n \"Sequences longer than this will be truncated, and sequences shorter \\n\"\r\n \"than this will be padded.\")\r\n\r\n parser.add_argument(\"--train_batch_size\",\r\n default=32,\r\n type=int,\r\n help=\"Total batch size for training.\")\r\n\r\n parser.add_argument(\"--eval_batch_size\",\r\n default=32,\r\n type=int,\r\n help=\"Total batch size for eval.\")\r\n parser.add_argument(\"--learning_rate\",\r\n default=5e-5,\r\n type=float,\r\n help=\"The initial learning rate for Adam.\")\r\n\r\n parser.add_argument(\"--num_train_epochs\",\r\n default=4,\r\n type=float,\r\n help=\"Total number of training epochs to perform.\")\r\n\r\n parser.add_argument(\"--adam_epsilon\",\r\n default=1e-8,\r\n type=float,\r\n help=\"Proportion of training to perform linear learning rate warmup for. \"\r\n \"E.g., 0.1 = 10%% of training.\")\r\n\r\n parser.add_argument(\"--no_cuda\",\r\n action='store_true',\r\n help=\"Whether not to use CUDA when available\")\r\n\r\n parser.add_argument(\"--on_memory\",\r\n action='store_true',\r\n help=\"Whether to load train samples into memory or use disk\")\r\n\r\n parser.add_argument(\"--do_lower_case\",\r\n action='store_true',\r\n help=\"Whether to lower case the input text. True for uncased models, False for cased models.\")\r\n\r\n parser.add_argument(\"--local_rank\",\r\n type=int,\r\n default=-1,\r\n help=\"local_rank for distributed training on gpus\")\r\n\r\n parser.add_argument('--seed',\r\n type=int,\r\n default=42,\r\n help=\"random seed for initialization\")\r\n\r\n parser.add_argument('--gradient_accumulation_steps',\r\n type=int,\r\n default=1,\r\n help=\"Number of updates steps to accumualte before performing a backward/update pass.\")\r\n\r\n parser.add_argument('--fp16',\r\n action='store_true',\r\n help=\"Whether to use 16-bit float precision instead of 32-bit\")\r\n\r\n parser.add_argument('--loss_scale',\r\n type=float, default=0,\r\n help=\"Loss scaling to improve fp16 numeric stability. Only used when fp16 set to True.\\n\"\r\n \"0 (default value): dynamic loss scaling.\\n\"\r\n \"Positive power of 2: static loss scaling value.\\n\")\r\n\r\n args = parser.parse_args()\r\n\r\n if args.local_rank == -1 or args.no_cuda:\r\n device = torch.device(\"cuda\" if torch.cuda.is_available() and not args.no_cuda else \"cpu\")\r\n n_gpu = torch.cuda.device_count()\r\n else:\r\n torch.cuda.set_device(args.local_rank)\r\n device = torch.device(\"cuda\", args.local_rank)\r\n n_gpu = 1\r\n # Initializes the distributed backend which will take care of sychronizing nodes/GPUs\r\n torch.distributed.init_process_group(backend='nccl')\r\n logger.info(\"device: {} n_gpu: {}, distributed training: {}, 16-bits training: {}\".format(\r\n device, n_gpu, bool(args.local_rank != -1), args.fp16))\r\n\r\n if args.gradient_accumulation_steps < 1:\r\n raise ValueError(\"Invalid gradient_accumulation_steps parameter: {}, should be >= 1\".format(\r\n args.gradient_accumulation_steps))\r\n\r\n args.train_batch_size = int(args.train_batch_size / args.gradient_accumulation_steps)\r\n\r\n random.seed(args.seed)\r\n np.random.seed(args.seed)\r\n torch.manual_seed(args.seed)\r\n if n_gpu > 0:\r\n torch.cuda.manual_seed_all(args.seed)\r\n\r\n #if not args.do_train and not args.do_eval:\r\n # raise ValueError(\"At least one of `do_train` or `do_eval` must be True.\")\r\n\r\n if os.path.exists(args.output_dir) and os.listdir(args.output_dir):\r\n raise ValueError(\"Output directory ({}) already exists and is not empty.\".format(args.output_dir))\r\n os.makedirs(args.output_dir, exist_ok=True)\r\n\r\n # tokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=args.do_lower_case)\r\n #tokenizer = XLNetTokenizer.from_pretrained(args.tokenizer_model)\r\n tokenizer = BertTokenizer.from_pretrained(args.tokenizer_model, do_lower_case=False)\r\n\r\n # train_examples = None\r\n num_train_steps = None\r\n\r\n print(\"Loading Train Dataset\", args.train_file)\r\n train_dataset = BERTDataset(args.train_file, tokenizer, seq_len=args.max_seq_length,\r\n corpus_lines=None, on_memory=args.on_memory)\r\n\r\n print(\"Loading eval Dataset\", args.test_file)\r\n eval_dataset = BERTDataset(args.test_file, tokenizer, seq_len=args.max_seq_length,\r\n corpus_lines=None, on_memory=args.on_memory)\r\n\r\n num_train_steps = int(\r\n len(train_dataset) / args.train_batch_size / args.gradient_accumulation_steps * args.num_train_epochs)\r\n\r\n # Prepare model\r\n\r\n model = BertForPreTraining.from_pretrained(\r\n args.bert_model,\r\n output_attentions=False,\r\n output_hidden_states=False,)\r\n model.to(device)\r\n\r\n\r\n if args.fp16:\r\n model.half()\r\n if args.local_rank != -1:\r\n try:\r\n from apex.parallel import DistributedDataParallel as DDP\r\n except ImportError:\r\n raise ImportError(\r\n \"Please install apex from https://www.github.com/nvidia/apex to use distributed and fp16 training.\")\r\n model = DDP(model)\r\n elif n_gpu > 1:\r\n model = torch.nn.DataParallel(model)\r\n\r\n # Prepare optimizer\r\n '''\r\n param_optimizer = list(model.named_parameters())\r\n no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']\r\n optimizer_grouped_parameters = [\r\n {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},\r\n {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}\r\n ]\r\n if args.fp16:\r\n try:\r\n from apex.optimizers import FP16_Optimizer\r\n from apex.optimizers import FusedAdam\r\n except ImportError:\r\n raise ImportError(\r\n \"Please install apex from https://www.github.com/nvidia/apex to use distributed and fp16 training.\")\r\n\r\n optimizer = FusedAdam(optimizer_grouped_parameters,\r\n lr=args.learning_rate,\r\n bias_correction=False,\r\n max_grad_norm=1.0)\r\n if args.loss_scale == 0:\r\n optimizer = FP16_Optimizer(optimizer, dynamic_loss_scale=True)\r\n else:\r\n optimizer = FP16_Optimizer(optimizer, static_loss_scale=args.loss_scale)\r\n\r\n else:\r\n optimizer = AdamW(optimizer_grouped_parameters,\r\n lr=args.learning_rate,\r\n warmup=args.warmup_proportion,\r\n t_total=num_train_steps)\r\n '''\r\n #global_step = 0\r\n\r\n logger.info(\"***** Running training *****\")\r\n logger.info(\" Num examples = %d\", len(train_dataset))\r\n logger.info(\" Batch size = %d\", args.train_batch_size)\r\n logger.info(\" Num steps = %d\", num_train_steps)\r\n\r\n if args.local_rank == -1:\r\n train_sampler = SequentialSampler(train_dataset)\r\n eval_sampler = SequentialSampler(eval_dataset)\r\n\r\n else:\r\n # TODO: check if this works with current data generator from disk that relies on file.__next__\r\n # (it doesn't return item back by index)\r\n train_sampler = DistributedSampler(train_dataset)\r\n eval_sampler = DistributedSampler(eval_dataset)\r\n train_dataloader = DataLoader(train_dataset, sampler=train_sampler, batch_size=args.train_batch_size)\r\n eval_dataloader = DataLoader(eval_dataset, sampler=eval_sampler, batch_size=args.train_batch_size)\r\n\r\n #optimizer\r\n no_decay = ['bias', 'LayerNorm.weight']\r\n optimizer_grouped_parameters = [\r\n {'params': [p for n, p in model.named_parameters() if\r\n not any(nd in n for nd in no_decay)],\r\n 'weight_decay': 0.01},\r\n {'params': [p for n, p in model.named_parameters() if any(\r\n nd in n for nd in no_decay)], 'weight_decay': 0.0}\r\n ]\r\n optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate, eps=args.adam_epsilon)\r\n scheduler = get_linear_schedule_with_warmup(\r\n optimizer, 0, len(train_dataloader))\r\n\r\n model.train()\r\n tr_loss = 0\r\n global_step = 0\r\n acc = 0\r\n train_loss = 0.0\r\n nb_tr_examples, nb_tr_steps = 0, 0\r\n for _ in trange(int(args.num_train_epochs), desc=\"Epoch\"):\r\n for step, batch in enumerate(tqdm(train_dataloader, desc=\"Iteration\")):\r\n batch = tuple(t.to(device) for t in batch)\r\n input_ids, input_mask, segment_ids, lm_label_ids, is_next = batch\r\n outputs = model(input_ids=input_ids, attention_mask=input_mask, token_type_ids=segment_ids,\r\n labels=lm_label_ids, next_sentence_label=is_next)\r\n\r\n\r\n loss = outputs.loss\r\n '''\r\n if n_gpu > 1:\r\n loss = loss.mean() # mean() to average on multi-gpu.\r\n if args.gradient_accumulation_steps > 1:\r\n loss = loss / args.gradient_accumulation_steps\r\n if args.fp16:\r\n optimizer.backward(outputs.loss)\r\n else:\r\n loss.backward()\r\n '''\r\n loss.backward()\r\n tr_loss += loss.item()\r\n nb_tr_examples += input_ids.size(0)\r\n nb_tr_steps += 1\r\n torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1)\r\n optimizer.step()\r\n scheduler.step()\r\n model.zero_grad()\r\n global_step += 1\r\n '''\r\n if (step + 1) % args.gradient_accumulation_steps == 0:\r\n # modify learning rate with special warm up BERT uses\r\n lr_this_step = args.learning_rate * warmup_linear(global_step / num_train_steps, args.warmup_proportion)\r\n for param_group in optimizer.param_groups:\r\n param_group['lr'] = lr_this_step\r\n optimizer.step()\r\n scheduler.step()\r\n optimizer.zero_grad()\r\n global_step += 1\r\n '''\r\n\r\n train_loss = tr_loss / global_step\r\n perplexity = torch.exp(torch.tensor(train_loss)).item()\r\n\r\n print(\"Training loss {} \".format(\"{:.3f}\".format(train_loss)))\r\n print(\"Training perplexity {}\".format(\"{:.3f}\".format(perplexity)))\r\n\r\n logger.info(\"***** Running evaluation *****\")\r\n logger.info(\" Num examples = %d\", len(eval_dataset))\r\n logger.info(\" Batch size = %d\", batch_size)\r\n eval_loss = 0.0\r\n acc = 0\r\n nb_eval_steps = 0\r\n for batch in tqdm_notebook(eval_dataloader, desc='Evaluating'):\r\n batch = tuple(t.to(device) for t in batch)\r\n input_ids, input_mask, segment_ids, lm_label_ids, is_next = batch\r\n with torch.no_grad():\r\n outputs = model(input_ids, segment_ids, input_mask, lm_label_ids, is_next)\r\n loss = outputs.loss\r\n eval_loss += loss.mean().item()\r\n nb_eval_steps += 1\r\n\r\n eval_loss = eval_loss / nb_eval_steps\r\n perplexity = torch.exp(torch.tensor(eval_loss)).item()\r\n\r\n print(\"Evalution loss {} \".format(\"{:.3f}\".format(eval_loss)))\r\n print(\"Evalution perplexity {}\".format(\"{:.3f}\".format(perplexity)))\r\n\r\n if not os.path.exists(output_dir):\r\n os.makedirs(output_dir)\r\n\r\n print(\"Saving model to %s\" % args.output_dir)\r\n\r\n # Save a trained model, configuration and tokenizer using `save_pretrained()`.\r\n # They can then be reloaded using `from_pretrained()`\r\n model_to_save = model.module if hasattr(model, 'module') else model # Take care of distributed/parallel training\r\n model_to_save.save_pretrained(args.output_dir)\r\n tokenizer.save_pretrained(args.output_dir)\r\n\r\n # Save a trained model\r\n #logger.info(\"** ** * Saving fine - tuned model ** ** * \")\r\n model_to_save = model.module if hasattr(model, 'module') else model # Only save the model it-self\r\n #if args.do_train:\r\n # model_to_save.save_pretrained(self.output_dir)\r\n # tokenizer.save_pretrained(self.output_dir)\r\n\r\n\r\ndef _truncate_seq_pair(tokens_a, tokens_b, max_length):\r\n \"\"\"Truncates a sequence pair in place to the maximum length.\"\"\"\r\n\r\n # This is a simple heuristic which will always truncate the longer sequence\r\n # one token at a time. This makes more sense than truncating an equal percent\r\n # of tokens from each, since if one sequence is very short then each token\r\n # that's truncated likely contains more information than a longer sequence.\r\n while True:\r\n total_length = len(tokens_a) + len(tokens_b)\r\n if total_length <= max_length:\r\n break\r\n if len(tokens_a) > len(tokens_b):\r\n tokens_a.pop()\r\n else:\r\n tokens_b.pop()\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\n```\r\n######Message Error#########\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [5,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [8,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [10,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [11,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [13,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [14,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [15,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [16,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [17,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [18,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [20,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [21,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [22,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [23,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [24,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [25,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [26,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [29,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [31,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nTHCudaCheck FAIL file=/pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu line=115 error=710 : device-side assert triggered\r\nIteration: 0% 0/8312 [00:00<?, ?it/s]\r\nEpoch: 0% 0/4 [00:00<?, ?it/s]\r\nTraceback (most recent call last):\r\n File \"/content/run.py\", line 748, in <module>\r\n main()\r\n File \"/content/run.py\", line 651, in main\r\n labels=lm_label_ids, next_sentence_label=is_next)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_bert.py\", line 955, in forward\r\n masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py\", line 962, in forward\r\n ignore_index=self.ignore_index, reduction=self.reduction)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py\", line 2468, in cross_entropy\r\n return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py\", line 2264, in nll_loss\r\n ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)\r\nRuntimeError: cuda runtime error (710) : device-side assert triggered at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:115\r\n```",
"You set your MLM labels to -1 when padding. You should set them to -100 if you want them to be ignored. See the [docs](https://huggingface.co/transformers/model_doc/bert.html#transformers.BertForPreTraining.forward).",
"same error with -100 padding but in this line lm_label_ids = ([-1] + t1_label + [-1] + t2_label + [-1]) when i set -1 to -100 my script run as well but i do not this way is correct or not ?",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,610 | 1,614 | 1,614 | NONE | null | when i fine tuning bert using BertForPreTraining i got error here --> outputs = model(input_ids=input_ids, attention_mask=input_mask, token_type_ids=segment_ids,
labels=lm_label_ids, next_sentence_label=is_next)
and in this line also loss.backward()
i got this error RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle) and this error occur when shape of prediction not equal labels but i checked shape like this len(prediction_logits) == len(lm_label_ids) and same shape
what is the problem ?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9547/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9546 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9546/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9546/comments | https://api.github.com/repos/huggingface/transformers/issues/9546/events | https://github.com/huggingface/transformers/issues/9546 | 784,587,186 | MDU6SXNzdWU3ODQ1ODcxODY= | 9,546 | Entity level F-1 scores in run_ner.py | {
"login": "pranav-s",
"id": 9393002,
"node_id": "MDQ6VXNlcjkzOTMwMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9393002?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pranav-s",
"html_url": "https://github.com/pranav-s",
"followers_url": "https://api.github.com/users/pranav-s/followers",
"following_url": "https://api.github.com/users/pranav-s/following{/other_user}",
"gists_url": "https://api.github.com/users/pranav-s/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pranav-s/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pranav-s/subscriptions",
"organizations_url": "https://api.github.com/users/pranav-s/orgs",
"repos_url": "https://api.github.com/users/pranav-s/repos",
"events_url": "https://api.github.com/users/pranav-s/events{/privacy}",
"received_events_url": "https://api.github.com/users/pranav-s/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"Oh this makes me realize this script hasn't been switched to use the datasets metrics for seqeval. This would solve the issue as it computes all scores. Will do that tomorrow.",
"Thank you @sgugger"
] | 1,610 | 1,610 | 1,610 | NONE | null | # 🚀 Feature request
The run_ner.py script in examples/token_classification reports evaluation metrics in terms of token F-1 scores (from what I can tell by examining the code). This issue is to request entity level F-1 scores as an evaluation metric. Token level scores will evaluate the F-1 score over an input such as ["I", "work", "for", "ABC", "com", "##pany", "in", "New", "York", "City"] with the label for each token considered separately. In this example, if the ground truth labels are ["O", "O", "O", "ORG", "O", "O", "O", "GPE", "GPE", "GPE"] the label for "New" , "York" and "City" from the predictions would be compared against ground truth separately. Entity level scores would mark a true positive if for instance all 3 tokens in the span "New York City" are labelled correctly.
## Motivation
Token level F-1 scores can be a more lenient metric compared to entity level scores since labels on sub-words/entities are considered separately.
## Contribution
I can also help implement this but I am not certain how I should get started with this. Any help is appreciated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9546/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9545 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9545/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9545/comments | https://api.github.com/repos/huggingface/transformers/issues/9545/events | https://github.com/huggingface/transformers/pull/9545 | 784,585,169 | MDExOlB1bGxSZXF1ZXN0NTUzNzU0NDUx | 9,545 | Doc: Update pretrained_models wording | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,610 | 1,610 | 1,610 | MEMBER | null | To clarify things cf. this tweet for instance https://twitter.com/RTomMcCoy/status/1349094111505211395
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9545/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9545/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9545",
"html_url": "https://github.com/huggingface/transformers/pull/9545",
"diff_url": "https://github.com/huggingface/transformers/pull/9545.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9545.patch",
"merged_at": 1610535486000
} |
https://api.github.com/repos/huggingface/transformers/issues/9544 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9544/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9544/comments | https://api.github.com/repos/huggingface/transformers/issues/9544/events | https://github.com/huggingface/transformers/issues/9544 | 784,582,811 | MDU6SXNzdWU3ODQ1ODI4MTE= | 9,544 | RFC: ternary assignment style in transformers code revisited | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"here is @patrickvonplaten's follow up posted with his permission (we initially did it over gist):\r\n\r\n------------------\r\n\r\n\r\nThanks for the write-up!\r\nI guess everybody has a slightly different opinion regarding ternary assignmens . My opinion is:\r\n\r\n1) I would never write a if-elif-else statement in one line (I don't think we have many lines like this in transformers)\r\n2) I do use the ternary assignment quite a lot, but only for IMO \"simple\" statements like:\r\n```do_sample = do_sample if do_sample is not None else self.config.do_sample``` or \r\n```all_attentions = () if output_attentions else None```\r\nI would also use it for your example above ``` self.model = model.module if args.deepspeed else model``` I guess, but I wouldn't use it for complexer statements, such as this one: https://github.com/huggingface/transformers/blob/61443cd7d917ef323a799ee27bb4abc4344f0d11/src/transformers/models/t5/modeling_t5.py#L887 *e.g.*.\r\n\r\n3) I do look quite a bit into how the coding style is in the respective file. E.g. `generation_utils.py` has a slightly different style from the `modling_...py` files from `trainer.py` IMO -> so I try to adapt here. *E.g.* I think Quentin likes to use statements like ```dataset = self.manual_data or self.default_data``` in datasets which I then adapt to, but that's something we don't do in transformers (not really sure why)\r\n4) I think code design also depends quite a bit on the coding environment someone has set up for him/herself. I'm using a rather special neovim+tmux+zsh setup, which I think shows the code differently to all the vscode users. *E.g.* I don't mind having long lines (>119) because it's nicely displayed in Vim for me, but Sylvain is not a big fan of it afaik.\r\n\r\nBut I'm happy to have some stricter rules on code design! We should probably include Sylvain and Lysandre then as well.\r\n\r\nSome other conventions we could/should standardize:\r\n\r\n- Only use f-strings => Sylvain really wants us to use f-strings and I also think that they are the nicest design\r\n- stricter ordering of where docstring, helper functions and classes should be in the `modeling_....py` files\r\n- I don't really like nested if-else statements. E.g. I prefer:\r\n ```python\r\n if a and b:\r\n # ...\r\nelif a and not b:\r\n # ...\r\nelif not a and b:\r\n # ...\r\nelse:\r\n # ...\r\n```\r\n\r\nvery much over\r\n\r\n```python\r\nif a:\r\n if b:\r\n # ...\r\n else: \r\n # ...\r\nelse:\r\n if b:\r\n # ... \r\n else:\r\n # ...\r\n```\r\n\r\n- I don't like it when variable have one-letter names. I working quite heavily with search and search-and-replace patterns in vim to understand/refactor code and I think they are not very readable => so I think it's always better to have at least some what understandable variable names. *e.g.* even if everybody knows the q,k,v logic in Transformers IMO, `query_states`, `value_states` and `key_states` are better names.",
"I personally don't think the ternary style is harder to read when it's for a very simple condition, on the contrary. Like Patrick said, I would not use it for a `if-elif-else statement` as it then is harder to read and understand, but for something like \r\n```\r\n if args.deepspeed:\r\n self.model = model.module\r\n else:\r\n self.model = model\r\n```\r\nis typically the situation where I would encourage a ternary line. \r\n\r\nThe only other situation I don't use them is if the formatter gets in the way because the line is long, as it's then clearer in the unrolled version.\r\n\r\n> Another advantage of unwrapped ternary op is very noticeable during interactive debug sessions - you can't easily break or step through such one-liners\r\n\r\nIf only used for simple tests like I mentioned above, this should be a no-problem.\r\n\r\nI agree with patrick all other comments (especially the f-strings! if there is one thing my brain has trouble parsing, it's the `.format(...)`. And I would extend one-letter variable names to non-standard abbreviations in general, as it makes the code harder to read for non-native English speakers.\r\n\r\nHowever, fixing current files to fix those guidelines is not a priority IMO. I would put finishing the decoupling of the models (removing things like the `Summary` class in modeling_utils) above and it's already low priority for me. We're not many core maintainers and there is lots to do!",
"Thank you for sharing your preferences\r\n\r\nOh, in no way I was suggesting that we need to change anything. I'm just observing the readability impact for myself and was curious to hear whether others find it the same. But so far clearly it's not the case.\r\n\r\nHaving coded most of my programming life in Perl I guess I got used to aligning things vertically the way that made them most readable, since Perl has no indentation requirements, so I have always aligned assignments and branches for the fastest possible reading.\r\n\r\nBut, I have no problem with the ternary assignment since that's the style of this project and you seem to prefer it, and so that it's important to remain consistent.\r\n\r\nI am totally with you on the f-strings, - I was trying to keep this focused to just one subject matter, but if you'd like to expand it to other style issues, we can easily do that.\r\n\r\nWe can also close it this rfc at any time, if you feel there is nothing else that needs to be said or done, as the two of you expressing that you like ternary style is sufficient to not needing to continue.\r\n",
"Thanks for bringing this issue! Like Sylvain, I personally don't think the ternary style is harder to read. As you've said @stas00, I think it depends on the language one is used to; since I like to say that Python can nearly be \"read\" like prose or natural language, this is the case where it shines:\r\n\r\nThe following statement is way closer to natural language\r\n```py\r\nself.model = model.module if args.deepspeed else model\r\n```\r\n\r\nthan the following\r\n```py\r\nif args.deepspeed:\r\n self.model = model.module\r\nelse:\r\n self.model = model\r\n```\r\n\r\neven if the latter will probably be easier to read to users coming from different languages than Python.\r\n\r\nRegarding the `map`/`filter` and other lambda methods, this is a very personal choice but they're (usually!) harder to read than the list/dict comprehensions that can replace. Once again, this is a very opinionated statement.",
"Thank you all for your feedback. \r\n\r\nIt's loud and clear ternary ops are the norm at this project, with the recommendation to avoid nested ternary ops in the new future code."
] | 1,610 | 1,610 | 1,610 | CONTRIBUTOR | null | I run this by @patrickvonplaten and he encouraged I post it here, and he wrote a follow up which I will post next.
---------------------------
As `transformers`'s mandate is to be super user-friendly code-wise I wanted to ask whether very frequently used in the `transformers` code ternary assignment `a = x if z else y` actually supports that mandate.
It is used a lot:
```
grep -Ir if src/transformers | grep else | grep = | wc -l
1043
```
As I was just writing some code where I had:
```python
if args.deepspeed:
self.model = model.module
else:
self.model = model
```
I then rewrote it in the `transformers` style of:
```python
self.model = model.module if args.deepspeed else model
```
and then I realized that my original code is a way more readable and my "internal" compiler instantly gets it and moves on, whereas the ternary style is super slow to ingest. It could be just me, for me vertical alignment helps reading code a lot!
Surely, that's 1 line vs 4. So each of them has their pros and cons.
I'm sure it won't be too hard to find much less readable nested ternary assignment code in `transformers`, e.g.:
```python
self.device = device if framework == "tf" else torch.device("cpu" if device < 0 else "cuda:{}".format(device))
```
as compared to rewriting it as:
```python
if framework == "tf":
self.device = device
elif device < 0:
self.device = torch.device("cpu")
else:
self.device = torch.device("cuda:{}".format(device))
```
- Does it take many more lines and fits less code into the screen - hell yeah
- Is it much more readable - IMHO absolutely! Especially due to the vertical alignment
- Is it less error-prone - very likely.
I'd have even split the `elif` to clearly see that a different group of conditionals is being tested in the second part, but that's just personal style.
Binary search always beats linear search, even for a small number of items.
Another advantage of unwrapped ternary op is very noticeable during interactive debug sessions - you can't easily break or step through such one-liners, especially when the juggled values aren't variables but function calls.
I just find it somewhat inconsistent that this developer collective tries hard to avoid `map`, `filter` and `reduce` as more difficult to read, yet ternary style is used very often. To me the 3 mentioned operators are in the same category as ternary operators readability-wise since they require horizontal reading. That's just my perception of course.
This is not a critique but rather a question of whether this part of style is intentional or just came about because someone likes vertically compact code and is good at reading horizontal logic. At the end of the day, this is not a deal breaker, it just takes me much longer to get such code.
Thank you.
@LysandreJik, @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9544/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9543 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9543/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9543/comments | https://api.github.com/repos/huggingface/transformers/issues/9543/events | https://github.com/huggingface/transformers/issues/9543 | 784,425,573 | MDU6SXNzdWU3ODQ0MjU1NzM= | 9,543 | Generating sequence from two input sequences | {
"login": "MiriamFarber",
"id": 35157503,
"node_id": "MDQ6VXNlcjM1MTU3NTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/35157503?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MiriamFarber",
"html_url": "https://github.com/MiriamFarber",
"followers_url": "https://api.github.com/users/MiriamFarber/followers",
"following_url": "https://api.github.com/users/MiriamFarber/following{/other_user}",
"gists_url": "https://api.github.com/users/MiriamFarber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MiriamFarber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MiriamFarber/subscriptions",
"organizations_url": "https://api.github.com/users/MiriamFarber/orgs",
"repos_url": "https://api.github.com/users/MiriamFarber/repos",
"events_url": "https://api.github.com/users/MiriamFarber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MiriamFarber/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead? You'll get more answers there.\r\n\r\nThanks!"
] | 1,610 | 1,610 | 1,610 | NONE | null | The code here
https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune_trainer.py
enables training `distilbart-xsum-12-6` model that gets an input sequence and outputs an input sequence. Is there a simple way to adapt the code to enable it to get two input sequences (e.g sentence and context sentence) and output a sequence?
It seems to me that I need to use something like that: https://huggingface.co/transformers/preprocessing.html#preprocessing-pairs-of-sentences, but wasn't sure how to combine it with the bart code.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9543/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9542 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9542/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9542/comments | https://api.github.com/repos/huggingface/transformers/issues/9542/events | https://github.com/huggingface/transformers/issues/9542 | 784,358,243 | MDU6SXNzdWU3ODQzNTgyNDM= | 9,542 | Is the GPT-2 forward too different from Bert or RoBerta? | {
"login": "celsofranssa",
"id": 11181748,
"node_id": "MDQ6VXNlcjExMTgxNzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/11181748?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/celsofranssa",
"html_url": "https://github.com/celsofranssa",
"followers_url": "https://api.github.com/users/celsofranssa/followers",
"following_url": "https://api.github.com/users/celsofranssa/following{/other_user}",
"gists_url": "https://api.github.com/users/celsofranssa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/celsofranssa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/celsofranssa/subscriptions",
"organizations_url": "https://api.github.com/users/celsofranssa/orgs",
"repos_url": "https://api.github.com/users/celsofranssa/repos",
"events_url": "https://api.github.com/users/celsofranssa/events{/privacy}",
"received_events_url": "https://api.github.com/users/celsofranssa/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"the `IndexError` is from the padding token being added. GPT2 doesn't have a padding token. You should be able to manually set that input_id to 0 (or any other valid input id) and then rely on the attention mask to ignore those positions. ",
"> the `IndexError` is from the padding token being added. GPT2 doesn't have a padding token. You should be able to manually set that input_id to 0 (or any other valid input id) and then rely on the attention mask to ignore those positions.\r\n\r\nBut I had already added the `PAD` token which receives the `token_id = 50257`:\r\n```\r\ntokenizer.add_special_tokens({'pad_token': '[PAD]'})\r\n```",
"that tells the tokenizer to add a token and creates the new token id, but it doesn't modify the embedding layer of the model. the new token id is still invalid for the embedding layer of GPT-2 (which does not include a pad token). the reason it works for roberta and bert is b/c they were trained with pad tokens and therefore have entries in their embedding layers for that token. you want to do something like, \r\n\r\n`inputs[inputs==50257] == 0`",
"Hi! In the [documentation of `add_special_tokens`](https://huggingface.co/transformers/internal/tokenization_utils.html?highlight=resize_token_embeddings#transformers.tokenization_utils_base.SpecialTokensMixin.add_special_tokens), you'll see in the sample code the following line:\r\n\r\n```py\r\nmodel.resize_token_embeddings(len(tokenizer))\r\n```\r\n\r\nas @galtay mentions, you need to resize the embedding layer when adding tokens to the tokenizer, otherwise the model will not know that its embedding matrix has been resized.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,610 | 1,614 | 1,614 | NONE | null | I am using some Transformers (Bert, RoBerta, etc.) in my project.
When including the `GPT-2` as shown below:
```python
from transformers import GPT2Tokenizer, GPT2Model
import torch
# inits model and tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
model = GPT2Model.from_pretrained('gpt2')
# inputs
inputs = tokenizer.encode(text="Hello, my dog is cute", max_length=12, padding="max_length",
truncation=True)
#[15496, 11, 616, 3290, 318, 13779, 50257, 50257, 50257, 50257, 50257, 50257]
# input_ids tensor and attention mask
features = torch.tensor([inputs])
# tensor([[15496, 11, 616, 3290, 318, 13779, 50257, 50257, 50257, 50257,
# 50257, 50257]])
attention_mask = (features < 50257).int()
# tensor([[1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]], dtype=torch.int32)
# outputs
outputs = model(
input_ids=features,
attention_mask=attention_mask
)
```
I have had the following error:
```python
IndexError Traceback (most recent call last)
<ipython-input-31-763ba5835cf8> in <module>()
----> 1 outputs = model(input_ids=features, attention_mask=attention_mask)
4 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/usr/local/lib/python3.6/dist-packages/transformers/models/gpt2/modeling_gpt2.py in forward(self, input_ids, past_key_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, use_cache, output_attentions, output_hidden_states, return_dict)
679
680 if inputs_embeds is None:
--> 681 inputs_embeds = self.wte(input_ids)
682 position_embeds = self.wpe(position_ids)
683 hidden_states = inputs_embeds + position_embeds
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/sparse.py in forward(self, input)
124 return F.embedding(
125 input, self.weight, self.padding_idx, self.max_norm,
--> 126 self.norm_type, self.scale_grad_by_freq, self.sparse)
127
128 def extra_repr(self) -> str:
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1850 # remove once script supports set_grad_enabled
1851 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1852 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1853
1854
IndexError: index out of range in self
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9542/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9541 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9541/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9541/comments | https://api.github.com/repos/huggingface/transformers/issues/9541/events | https://github.com/huggingface/transformers/pull/9541 | 784,357,016 | MDExOlB1bGxSZXF1ZXN0NTUzNTYxODcx | 9,541 | Fix fill mask pipeline slow test using deprecated argument | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,610 | 1,610 | 1,610 | MEMBER | null | Uses `topk` which was deprecated in removed in favor of `top_k` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9541/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9541",
"html_url": "https://github.com/huggingface/transformers/pull/9541",
"diff_url": "https://github.com/huggingface/transformers/pull/9541.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9541.patch",
"merged_at": 1610486490000
} |
https://api.github.com/repos/huggingface/transformers/issues/9540 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9540/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9540/comments | https://api.github.com/repos/huggingface/transformers/issues/9540/events | https://github.com/huggingface/transformers/issues/9540 | 784,323,657 | MDU6SXNzdWU3ODQzMjM2NTc= | 9,540 | bounding by compute, retraining from the time the model is killed | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,610 | 1,614 | 1,614 | NONE | null | Hi
I have limited access to GPUs, with limited hours, I am using finetune_trainer.py is there a way I can retrain the models from the time it is killed? could you assist me and give me some hints?
thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9540/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9539 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9539/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9539/comments | https://api.github.com/repos/huggingface/transformers/issues/9539/events | https://github.com/huggingface/transformers/pull/9539 | 784,299,047 | MDExOlB1bGxSZXF1ZXN0NTUzNTEyNzUy | 9,539 | LayoutLM Config | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,610 | 1,610 | 1,610 | MEMBER | null | The `LayoutLM` configuration inherits from `BertConfig`, which should be fixed.
This was failing the `test_parents_and_children_in_mappings` test as it was placed after the `BertConfig` in the `AutoModelForSequenceClassification` test. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9539/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9539",
"html_url": "https://github.com/huggingface/transformers/pull/9539",
"diff_url": "https://github.com/huggingface/transformers/pull/9539.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9539.patch",
"merged_at": 1610463831000
} |
https://api.github.com/repos/huggingface/transformers/issues/9538 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9538/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9538/comments | https://api.github.com/repos/huggingface/transformers/issues/9538/events | https://github.com/huggingface/transformers/pull/9538 | 784,288,024 | MDExOlB1bGxSZXF1ZXN0NTUzNTAzNDA4 | 9,538 | fix BlenderbotSmallTokenizer | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,610 | 1,610 | 1,610 | MEMBER | null | # What does this PR do?
`BlenderbotSmallTokenizer` returns `token_type_ids` but those are not needed by the model. This PR fixes the tokniezer to not return `token_type_ids` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9538/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9538",
"html_url": "https://github.com/huggingface/transformers/pull/9538",
"diff_url": "https://github.com/huggingface/transformers/pull/9538.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9538.patch",
"merged_at": 1610515424000
} |
https://api.github.com/repos/huggingface/transformers/issues/9537 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9537/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9537/comments | https://api.github.com/repos/huggingface/transformers/issues/9537/events | https://github.com/huggingface/transformers/issues/9537 | 784,243,952 | MDU6SXNzdWU3ODQyNDM5NTI= | 9,537 | BertForTokenClassificiation save | {
"login": "Foysal87",
"id": 26604531,
"node_id": "MDQ6VXNlcjI2NjA0NTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/26604531?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Foysal87",
"html_url": "https://github.com/Foysal87",
"followers_url": "https://api.github.com/users/Foysal87/followers",
"following_url": "https://api.github.com/users/Foysal87/following{/other_user}",
"gists_url": "https://api.github.com/users/Foysal87/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Foysal87/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Foysal87/subscriptions",
"organizations_url": "https://api.github.com/users/Foysal87/orgs",
"repos_url": "https://api.github.com/users/Foysal87/repos",
"events_url": "https://api.github.com/users/Foysal87/events{/privacy}",
"received_events_url": "https://api.github.com/users/Foysal87/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can save a model using the `.save_pretrained()` method. So given that your model is called `model`, you can save it as follows:\r\n\r\n`model.save_pretrained(path_to_directory) `",
"Thank you for your reply. I already tried it. But got an error like this.\r\n\r\n```\r\n'BertForTokenClassification' object has no attribute 'save_pretrained'\r\n```\r\n",
"Can you share some more code about how you created the model?",
"```\r\nfrom pytorch_pretrained_bert import BertForTokenClassification\r\nmodel = BertForTokenClassification.from_pretrained(\"bert-base-cased\", num_labels=len(tag2idx))\r\n```\r\n\r\nafter that fine-tuning it. Then want to save this model.",
"ok, I got it.. I can save it by torch.. thank you"
] | 1,610 | 1,610 | 1,610 | NONE | null | How can I save the BertForTokenClassificiation model?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9537/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9536 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9536/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9536/comments | https://api.github.com/repos/huggingface/transformers/issues/9536/events | https://github.com/huggingface/transformers/pull/9536 | 784,242,526 | MDExOlB1bGxSZXF1ZXN0NTUzNDY0NjU4 | 9,536 | [WIP][EncoderDecoder] Fix label behavior | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,610 | 1,615 | 1,614 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9536/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9536",
"html_url": "https://github.com/huggingface/transformers/pull/9536",
"diff_url": "https://github.com/huggingface/transformers/pull/9536.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9536.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9535 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9535/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9535/comments | https://api.github.com/repos/huggingface/transformers/issues/9535/events | https://github.com/huggingface/transformers/issues/9535 | 784,237,274 | MDU6SXNzdWU3ODQyMzcyNzQ= | 9,535 | strange output of fast/slow tokenizers | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@chiapas , We will review this issue and propose code changes soon.",
"@chiapas , I executed the same code in GoogleColab. I am getting the same tokens for both of them. Here is the output.\r\n\r\n\r\n`{'input_ids': [[0, 2203, 23708, 441, 9, 7, 653, 39, 19437, 7, 7, 7, 347, 4526, 34837, 38, 981, 58, 238, 10060, 128500, 31, 3337, 1374, 58, 238, 10060, 58, 981, 58, 238, 10060, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}\r\n{'input_ids': [[0, 2203, 23708, 441, 9, 7, 653, 39, 19437, 7, 7, 7, 347, 4526, 34837, 38, 981, 58, 238, 10060, 128500, 31, 3337, 1374, 58, 238, 10060, 58, 981, 58, 238, 10060, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}\r\nno substring with 1 char fewer cause problem, stopped.`\r\n\r\nCan you please give me more details about the issue ?",
"Looks like this issue doesn't occurs if I upgrade my python version to `3.6.9` from `3.6.6`. I am not sure why there is a problem when this code sample is executed with python `3.6.6`, but since it is quite old version, I won't ask further investigation, and this issue could be closed."
] | 1,610 | 1,610 | 1,610 | COLLABORATOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.1
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.6.6
- PyTorch version (GPU?): 1.5.0+cpu (False)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
tokenizers: @mfuntowicz
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
@thomwolf
@Narsil
-->
## Information
First of all, this might be a problem of the fast tokenizer, ,but I am not 100% sure, because the bug occurs when I use `AutoTokenizer`, for which the code is in `transformers`. I didn't check the usage directly from the tokenizer's object.
Second, the problematic string is not meaningful. I am glad to see the fast tokenizer is available for XLM-Roberta, and I prefer to make sure it works as expected. So I compared the results from the slow/fast tokenizers in order to make sure the results are the same.
However, with this string (without the leading/training single quote)
'=LLC-s nmrcsss mtiaiol!@"ccc technooay @"ccc"@"ccc'
The result is different, you can see the output below.
Fast tokenizer
```
{'input_ids': [[0, 2203, 23708, 441, 9, 7, 653, 39, 19437, 7, 7, 7, 347, 4526, 34837, 38, 981, 58, 238, 10060, 128500, 31, 3337, 1374, 58, 238, 10060, 58, 981, 58, 238, 10060, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
```
Slow tokenizer
```
{'input_ids': [[0, 2203, 23708, 441, 9, 7, 653, 39, 19437, 7, 7, 7, 347, 4526, 34837, 38, 981, 58, 238, 10060, 128500, 31, 3337, 1374, 58, 238, 10060, 58, 981, 58, 10060, 238, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
```
The encoding at the end are
`... 58, 238, 10060, 2` and `... 58, 10060, 238, 2` respectively.
Furthermore, if I remove any character from it, the results become the same of these 2 tokenizers.
## To reproduce
Steps to reproduce the behavior:
```
from transformers import AutoTokenizer
tokenizer_fast = AutoTokenizer.from_pretrained("xlm-roberta-large", use_fast=True)
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large", use_fast=False)
# This string cause problem.
s = '=LLC-s nmrcsss mtiaiol!@"ccc technooay @"ccc"@"ccc'
o1 = tokenizer_fast.batch_encode_plus([s])
o2 = tokenizer.batch_encode_plus([s])
if not o1 == o2:
print('output are different!')
print(f'string: {s}')
print(o1)
print(o2)
# check substring
s2 = s
m = 0
while True:
m += 1
if m > 100000:
break
n = len(s2)
for i in range(0, n):
# substring of one char removed
s_temp = s2[0:i] + s2[i+1:]
o1 = tokenizer_fast.batch_encode_plus([s_temp])
o2 = tokenizer.batch_encode_plus([s_temp])
if not o1 == o2:
print(s_temp)
print('-------------------')
s2 = s_temp
break
if len(s2) == n:
print('no substring with 1 char fewer cause problem, stopped.')
break
```
Output:
```
output are different!
string: =LLC-s nmrcsss mtiaiol!@"ccc technooay @"ccc"@"ccc
{'input_ids': [[0, 2203, 23708, 441, 9, 7, 653, 39, 19437, 7, 7, 7, 347, 4526, 34837, 38, 981, 58, 238, 10060, 128500, 31, 3337, 1374, 58, 238, 10060, 58, 981, 58, 238, 10060, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
{'input_ids': [[0, 2203, 23708, 441, 9, 7, 653, 39, 19437, 7, 7, 7, 347, 4526, 34837, 38, 981, 58, 238, 10060, 128500, 31, 3337, 1374, 58, 238, 10060, 58, 981, 58, 10060, 238, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
no substring with 1 char fewer cause problem, stopped.
Process finished with exit code 0
```
## Expected behavior
The results are expected to be the same.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9535/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9534 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9534/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9534/comments | https://api.github.com/repos/huggingface/transformers/issues/9534/events | https://github.com/huggingface/transformers/issues/9534 | 784,141,444 | MDU6SXNzdWU3ODQxNDE0NDQ= | 9,534 | Need clarification in /examples/research_projects/rag/use_own_knowledge_dataset.py | {
"login": "mayank31398",
"id": 32954280,
"node_id": "MDQ6VXNlcjMyOTU0Mjgw",
"avatar_url": "https://avatars.githubusercontent.com/u/32954280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mayank31398",
"html_url": "https://github.com/mayank31398",
"followers_url": "https://api.github.com/users/mayank31398/followers",
"following_url": "https://api.github.com/users/mayank31398/following{/other_user}",
"gists_url": "https://api.github.com/users/mayank31398/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mayank31398/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mayank31398/subscriptions",
"organizations_url": "https://api.github.com/users/mayank31398/orgs",
"repos_url": "https://api.github.com/users/mayank31398/repos",
"events_url": "https://api.github.com/users/mayank31398/events{/privacy}",
"received_events_url": "https://api.github.com/users/mayank31398/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, the models have no model card. Maybe @lhoestq can help you out! ",
"Hi !\r\n- 'facebook/dpr-ctx_encoder-single-nq-base' is the DPR context encoder model trained on NQ alone\r\n- 'facebook/dpr-ctx_encoder-multiset-base' is the DPR context encoder model trained on the multiset/hybrid dataset defined in the paper. It includes Natural Questions, TriviaQA, WebQuestions and CuratedTREC",
"Thanks for clarifying @lhoestq "
] | 1,610 | 1,610 | 1,610 | CONTRIBUTOR | null | Please explain the difference between:
'facebook/dpr-ctx_encoder-single-nq-base' and 'facebook/dpr-ctx_encoder-multiset-base'
Which datasets are the 2 models trained on? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9534/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9533 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9533/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9533/comments | https://api.github.com/repos/huggingface/transformers/issues/9533/events | https://github.com/huggingface/transformers/issues/9533 | 784,124,443 | MDU6SXNzdWU3ODQxMjQ0NDM= | 9,533 | xla_spawn.py crashes when training on TPU V3-32 | {
"login": "avacaondata",
"id": 35173563,
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avacaondata",
"html_url": "https://github.com/avacaondata",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions",
"organizations_url": "https://api.github.com/users/avacaondata/orgs",
"repos_url": "https://api.github.com/users/avacaondata/repos",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"received_events_url": "https://api.github.com/users/avacaondata/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hmmm this may be because a TPU v3-32 regroups several TPU chips, as the error here seems to imply:\r\n```\r\nInternal: Invalid system configuration: 2x2 host topology with 0 missing hosts, but 1 hosts in total.\r\n```\r\n\r\n@sgugger can you confirm this is the source of the issue? Do you know the status of the Trainer/xla_spawn on TPU pods?",
"@LysandreJik If that's the source of the issue, what would be the procedure to solve it?",
"From the stack trace it doesn't look like it even gets to the training script, so I think there might be something wrong in your distributed TPU setup. Are you able to run another script (coming from official torch XLA for instance?) on this setup?",
"What do you mean by setup?? \r\n\r\n```{bash}\r\nXRT_TPU_CONFIG=\"tpu_worker;0;10.157.150.13:8470\"\r\n```\r\nThis is the configuration parameter I set before calling the training script, which starts like this:\r\n\r\n```{bash}\r\npython transformers/examples/xla_spawn.py --num_cores 8 \\\r\n transformers/examples/language-modeling/run_mlm.py \\\r\n```\r\n\r\nI also tried setting the --num_cores to 1 (it only accepts 1 or 8) in the second code snippet. \r\nWith v3-8 this setup works correctly, I don't know if you mean this by setup...",
"I mean the setup you are using. It can't be the same setup for one TPU (8cores) and a TPU pod (for the 32 cores). The second requires to launch different machines. That's why I was asking if you could run another example from someone else on your TPU v3-32.\r\n\r\nAlso, the launcher scrip `xla_spawn` only works for one TPU, not a TPU pod as fa as I know, so you will need to launch the script in a different way.",
"@sgugger does ```xla_spawn ``` not support TPU pod? As many issues in this repo are related to TPU pod, so I have thought ```xla_spawn``` also support it. Do you know any examples of using TPU pod?",
"Sorry I didn't write it well. I meant the launcher script `xla_spawn` has only been tested for one TPU, not a TPU pod as far as I know. So you may need to launch the script in a different way.\r\n\r\nI am not aware of anyone launching any of the example scripts on a TPU pod successfully, so I don't know if they work or not. ",
"I see.\r\nThat information should be added to the document if it does not.\r\n\r\nBy the way, TFTrainer support TPU pod? I think it does, but I have not tested yet.",
"Same thing, it has not been tested. We don't have resources setup to test for more than a single TPU (so 8 cores).",
"I understand. \r\nThank you for answering.\r\n",
"Okay, so I'd need to change the setup for a TPU pod then... I don't understand why all this complication to go from 8 cores to 32 cores actually, I know that's on Google's side, but I don't think it makes sense to complicate things so much to be able to train on a 32 cores TPU. As I understood, not only the setup must be changed, but also the script to launch the xla, right? I mean, the xla_spawn.py from Transformers is thought for 1 TPU, and it may crash on multiple TPU nodes?",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"I am having the same issue (xla_spawn works fine with 8 core TPU but fails with TPU pods v3-32 and above). Is there a way to utilize TPU pods with the transformers library? ",
"Hi @sgugger, I am running xla_spawn, can I know is this correct way to run the hugginface examples on TPU pods like v3-32? Thanks\r\n```\r\nTPU_NAME=tpu-v3-32\r\npython3 -m torch_xla.distributed.xla_dist \\\r\n --tpu=${TPU_NAME} --restart-tpuvm-pod-server -- \\\r\n python3 /transformers/examples/pytorch/xla_spawn.py --num_cores 8 /pytorch/text-classifications/run_glue.py \\\r\n --model_name_or_path bert-base-cased \\\r\n --dataset_name SetFit/mrpc \\\r\n --do_train \\\r\n --do_eval \\\r\n --max_seq_length 128 \\\r\n --per_device_train_batch_size 1 \\\r\n --per_device_eval_batch_size 32 \\\r\n --learning_rate 2e-5 \\\r\n --num_train_epochs 3 \\\r\n --run_name mnli_v3-32_bs-64_lr-2e-5-bert \\\r\n --output_dir /tmp/mrpc-bert/ \\\r\n --overwrite_output_dir\r\n``` \r\nWhere I use `python3 xla_dist` to wrap the `python3 xla_spawn.py --num_cores 8`",
"The examples can't be launched directly on TPU pods. cc @muellerzr who has worked on them with accelerate and can share how to run an example on a TPU pod.",
"Hi @sgugger Thanks:). I do successfully run above command on TPU pods (V3-32 and V4-64), see the [wandb results](https://wandb.ai/jianguozhang/huggingface/reports/mrpc-for-text-classification--VmlldzozMzU2NTE1?accessToken=hce0jseir4d3x32cqbocha2936xes1r1hbeupgijjy6l2lhujtd0y2577xzcdn2c). But i am not sure it whether it is correct way to use the commands as the training loss is much higher than that on GPUs, and V4-64 shows lower running speed than v3-32. \r\nHi @muellerzr, can you show an example that how to run huggingface torch_xla examples on TPU pods? Thanks:) "
] | 1,610 | 1,673 | 1,614 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.1
- Platform: Google Cloud debian-9-torch-xla-v20201215
- Python version: 3.7
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?):
- Using GPU in script?: NO; using TPUS
- Using distributed or parallel set-up in script?:
### Who can help
@sgugger @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...):
I am using an Albert base.
The problem arises when using:
* [x] the official example scripts: (give details below)
Using the examples/xla_spawn.py together with run_mlm.py it crashes when we try to use it with v3-32. We're supposed to set num cores to either 1 or 8 but in our case we have 32 cores and it raises an error. We've also tried to let that variable to 1 or 8, but in both cases it raises errors:
```
.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www
.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www
.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www
.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www
.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www
.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www
.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www
.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www
.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
Exception in device=TPU:0: tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:1229 : Check failed: session.Run({tensorflow::Output(result, 0)}, &outputs) == ::tensorflow::Status::OK() (Internal: From /job:tpu_worker/replica:0/t
ask:0:
2 root error(s) found.
(0) Internal: Invalid system configuration: 2x2 host topology with 0 missing hosts, but 1 hosts in total.
[[{{node configure_distributed_tpu/_0}}]]
[[ConfigureDistributedTPU_G3]]
(1) Internal: Invalid system configuration: 2x2 host topology with 0 missing hosts, but 1 hosts in total.
[[{{node configure_distributed_tpu/_0}}]]
0 successful operations.
0 derived errors ignored. vs. OK)
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
xla::XrtComputationClient::InitializeAndFetchTopology(std::string const&, int, std::string const&, tensorflow::ConfigProto const&)
xla::XrtComputationClient::InitializeDevices(std::unique_ptr<tensorflow::tpu::TopologyProto, std::default_delete<tensorflow::tpu::TopologyProto> >)
xla::XrtComputationClient::XrtComputationClient(xla::XrtComputationClient::Options, std::unique_ptr<tensorflow::tpu::TopologyProto, std::default_delete<tensorflow::tpu::TopologyProto> >)
xla::ComputationClient::Create()
xla::ComputationClient::Get()
_PyCFunction_FastCallDict
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCodeEx
PyObject_Call
_PyObject_GenericGetAttrWithDict
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCodeEx
PyObject_Call
_PyEval_EvalFrameDefault
PyEval_EvalCodeEx
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCodeEx
PyEval_EvalCode
PyRun_StringFlags
PyRun_SimpleStringFlags
Py_Main
main
__libc_start_main
*** End stack trace ***
Traceback (most recent call last):
File "transformers/examples/xla_spawn.py", line 85, in <module>
main()
File "transformers/examples/xla_spawn.py", line 81, in main
xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 395, in spawn
start_method=start_method)
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 157, in start_processes
while not context.join():
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 112, in join
(error_index, exitcode)
Exception: process 0 terminated with exit code 17
```
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name): MLM
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Initialize a TPU V3-32 and when running xla_spawn.py, set the number of cores to either 32, 8 or 1. In all three cases it raises an error.
## Expected behavior
There should not be any problem in setting the number of cores to the number of TPU cores we actually have. It really does not make sense to be able to train only with either 1 core or 8 cores... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9533/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9532 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9532/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9532/comments | https://api.github.com/repos/huggingface/transformers/issues/9532/events | https://github.com/huggingface/transformers/pull/9532 | 784,113,267 | MDExOlB1bGxSZXF1ZXN0NTUzMzU2MzYz | 9,532 | [Blenderbot] Fix Links | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ok now's really the time to remove those hard-coded links once and for all, i think",
"Indeed! It's on my todo for the week."
] | 1,610 | 1,610 | 1,610 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #9527
Credit goes to @LysandreJik for finding the fix.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9532/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9532",
"html_url": "https://github.com/huggingface/transformers/pull/9532",
"diff_url": "https://github.com/huggingface/transformers/pull/9532.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9532.patch",
"merged_at": 1610448812000
} |
https://api.github.com/repos/huggingface/transformers/issues/9531 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9531/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9531/comments | https://api.github.com/repos/huggingface/transformers/issues/9531/events | https://github.com/huggingface/transformers/issues/9531 | 784,106,141 | MDU6SXNzdWU3ODQxMDYxNDE= | 9,531 | Seq2Seq include custom glossary/dictionary | {
"login": "codingnoobneedshelp",
"id": 39620284,
"node_id": "MDQ6VXNlcjM5NjIwMjg0",
"avatar_url": "https://avatars.githubusercontent.com/u/39620284?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codingnoobneedshelp",
"html_url": "https://github.com/codingnoobneedshelp",
"followers_url": "https://api.github.com/users/codingnoobneedshelp/followers",
"following_url": "https://api.github.com/users/codingnoobneedshelp/following{/other_user}",
"gists_url": "https://api.github.com/users/codingnoobneedshelp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/codingnoobneedshelp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codingnoobneedshelp/subscriptions",
"organizations_url": "https://api.github.com/users/codingnoobneedshelp/orgs",
"repos_url": "https://api.github.com/users/codingnoobneedshelp/repos",
"events_url": "https://api.github.com/users/codingnoobneedshelp/events{/privacy}",
"received_events_url": "https://api.github.com/users/codingnoobneedshelp/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @codingnoobneedshelp \r\n\r\nNot sure what exactly this means,\r\n\r\nWhat do you mean by\r\n\r\n> custom glossary/dictionary\r\n\r\nand \r\n> ensure that specific words are always translated as they are in the glossary/dictionary.",
"Thanks for the answer. Let me try to clarify this. \r\nFrom Google: A glossary is a custom dictionary to consistently translate the customer's domain-specific terminology. This typically involves specifying how to translate a named entity. \r\n\r\nFor example, a Person name: \"Peter Eisen\" must translate to \"Peter Eisen.\" There are some cases where the model would translate this to \"Peter Iron\". \r\n\r\nSo I basically want to have a dictionary that tells the model that \"Peter Eisen\" should always be \"Peter Eisen\".\r\n\r\nDoes anyone know how I can archive that?\r\n\r\nThanks",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,610 | 1,614 | 1,614 | NONE | null | Hello,
Is it possible to include a custom glossary/dictionary while fine-tuning the Seq2Seq model for a specific domain?
So I basically want to ensure that specific words are always translated as they are in the glossary/dictionary.
Thanks for helping out.
coodingnoobneedshelp | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9531/timeline | completed | null | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.