Dataset schema (column · dtype · observed range):

| column | dtype | observed range |
|---|---|---|
| url | string | lengths 62–66 |
| repository_url | string | 1 class |
| labels_url | string | lengths 76–80 |
| comments_url | string | lengths 71–75 |
| events_url | string | lengths 69–73 |
| html_url | string | lengths 50–56 |
| id | int64 | 377M – 2.15B |
| node_id | string | lengths 18–32 |
| number | int64 | 1 – 29.2k |
| title | string | lengths 1–487 |
| user | dict | — |
| labels | list | — |
| state | string | 2 classes |
| locked | bool | 2 classes |
| assignee | dict | — |
| assignees | list | — |
| comments | sequence | — |
| created_at | int64 | 1.54k – 1.71k |
| updated_at | int64 | 1.54k – 1.71k |
| closed_at | int64 | 1.54k – 1.71k |
| author_association | string | 4 classes |
| active_lock_reason | string | 2 classes |
| body | string | lengths 0 – 234k |
| reactions | dict | — |
| timeline_url | string | lengths 71–75 |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| pull_request | dict | — |
https://api.github.com/repos/huggingface/transformers/issues/4212
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4212/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4212/comments
https://api.github.com/repos/huggingface/transformers/issues/4212/events
https://github.com/huggingface/transformers/issues/4212
614,220,806
MDU6SXNzdWU2MTQyMjA4MDY=
4,212
GPT2Tokenizer.decode() slow for individual tokens
{ "login": "Damiox", "id": 599804, "node_id": "MDQ6VXNlcjU5OTgwNA==", "avatar_url": "https://avatars.githubusercontent.com/u/599804?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Damiox", "html_url": "https://github.com/Damiox", "followers_url": "https://api.github.com/users/Damiox/followers", "following_url": "https://api.github.com/users/Damiox/following{/other_user}", "gists_url": "https://api.github.com/users/Damiox/gists{/gist_id}", "starred_url": "https://api.github.com/users/Damiox/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Damiox/subscriptions", "organizations_url": "https://api.github.com/users/Damiox/orgs", "repos_url": "https://api.github.com/users/Damiox/repos", "events_url": "https://api.github.com/users/Damiox/events{/privacy}", "received_events_url": "https://api.github.com/users/Damiox/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,588
1,594
1,594
NONE
null
`GPT2Tokenizer.decode()` is considerably slow when invoking it for each token individually rather than for a set of tokens... I need to pass in a set of tokens and get back a string for each token. I tried `convert_ids_to_tokens()`, but I'm getting results with the Ġ character. Should I simply remove that character from the results? I also tried `GPT2TokenizerFast`, but it failed with `"encoder() takes from 2 to 3 positional arguments but 6 were given"` when I was trying to use `batch_encode_plus()` to pad my tokens with the following arguments: tokens (that were already encoded), `pad_to_max_length=True` and `return_tensors='pt'`. transformers ver 2.6.0 - is this a bug?
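A minimal sketch of one way to get per-token strings without calling `decode()` in a loop; note that simply stripping Ġ would lose the "this token starts a new word" information, since Ġ is the byte-level encoding of a leading space. The input sentence here is made up:

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
token_ids = tokenizer.encode("Hello world, this is a test")  # illustrative input

# Batch-convert ids to byte-level tokens once, then turn each token back into
# readable text; convert_tokens_to_string maps 'Ġ' back to a leading space.
tokens = tokenizer.convert_ids_to_tokens(token_ids)
per_token_strings = [tokenizer.convert_tokens_to_string([t]) for t in tokens]
print(per_token_strings)  # e.g. ['Hello', ' world', ',', ' this', ' is', ' a', ' test']
```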
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4212/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4212/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4211
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4211/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4211/comments
https://api.github.com/repos/huggingface/transformers/issues/4211/events
https://github.com/huggingface/transformers/pull/4211
614,204,002
MDExOlB1bGxSZXF1ZXN0NDE0ODA2MjQ2
4,211
W&B integration improvements
{ "login": "vanpelt", "id": 17, "node_id": "MDQ6VXNlcjE3", "avatar_url": "https://avatars.githubusercontent.com/u/17?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vanpelt", "html_url": "https://github.com/vanpelt", "followers_url": "https://api.github.com/users/vanpelt/followers", "following_url": "https://api.github.com/users/vanpelt/following{/other_user}", "gists_url": "https://api.github.com/users/vanpelt/gists{/gist_id}", "starred_url": "https://api.github.com/users/vanpelt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vanpelt/subscriptions", "organizations_url": "https://api.github.com/users/vanpelt/orgs", "repos_url": "https://api.github.com/users/vanpelt/repos", "events_url": "https://api.github.com/users/vanpelt/events{/privacy}", "received_events_url": "https://api.github.com/users/vanpelt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "^^ I borked your PR unfortunately (💩). Sorry about that. Care to open a new one?" ]
1,588
1,588
1,588
CONTRIBUTOR
null
Hey guys, I'm one of the founders of W&B and have been working with @borisdayma on the wandb integration. I made a few changes to address a few cases we thought through:

1. If wandb is installed but the user hasn't logged in, we now print a warning instead of prompting for an api_key.
2. Made messaging clearer on how to install wandb and linked to more documentation.
3. Added an environment variable for disabling wandb. This is important for users using fastai or lightning with their own wandb callback.
4. Added an environment variable for disabling wandb.watch. Because we're pulling gradients and parameters during training, there is a small performance impact on training. We wanted to make it easy for users to disable this feature.

Happy to iterate on any feedback you guys have. Really excited about this integration!
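For context, a minimal sketch of how such switches are typically set; the PR itself doesn't name the variables, so treat `WANDB_DISABLED` and `WANDB_WATCH` (the names the transformers integration ended up using) as assumptions here:

```python
import os

# Disable the wandb integration entirely (e.g. when fastai/lightning already
# runs its own wandb callback):
os.environ["WANDB_DISABLED"] = "true"

# Keep logging but skip wandb.watch(), avoiding the small training-time
# overhead of streaming gradients and parameters:
os.environ["WANDB_WATCH"] = "false"
```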
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4211/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4211/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4211", "html_url": "https://github.com/huggingface/transformers/pull/4211", "diff_url": "https://github.com/huggingface/transformers/pull/4211.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4211.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4210
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4210/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4210/comments
https://api.github.com/repos/huggingface/transformers/issues/4210/events
https://github.com/huggingface/transformers/issues/4210
614,202,590
MDU6SXNzdWU2MTQyMDI1OTA=
4,210
Perplexity in T5/BART
{ "login": "Palipoor", "id": 16380397, "node_id": "MDQ6VXNlcjE2MzgwMzk3", "avatar_url": "https://avatars.githubusercontent.com/u/16380397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Palipoor", "html_url": "https://github.com/Palipoor", "followers_url": "https://api.github.com/users/Palipoor/followers", "following_url": "https://api.github.com/users/Palipoor/following{/other_user}", "gists_url": "https://api.github.com/users/Palipoor/gists{/gist_id}", "starred_url": "https://api.github.com/users/Palipoor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Palipoor/subscriptions", "organizations_url": "https://api.github.com/users/Palipoor/orgs", "repos_url": "https://api.github.com/users/Palipoor/repos", "events_url": "https://api.github.com/users/Palipoor/events{/privacy}", "received_events_url": "https://api.github.com/users/Palipoor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "T5 uses CrossEntropy loss so I think you can use `torch.exp(loss)` to calculate perplexity.", "Thanks!", "@patil-suraj Sorry to bother you again, I thought my problem has been solved, but I don't know how to deal with the loss. It's a (1,1,vocab_size) tensor. I don't even have an intuition about what that means. Can you help me?", "It seems that you are not passing `lm_labels`. When you pass `lm_labels` to T5/BART's forward methods then the first element of the output tuple is the loss, otherwise its `prediction_scores` tensor which you are receiving.", "hi @patil-suraj bart doesnt have lm_labels has argument", "`lm_labels` is renamed to `labels`", "Thanks for the prompt response, @patil-suraj , previously before trying Bart, I trained it with t5,the valid loss gets to almost zero, but still found it difficult generating correct translation. I would love to know reasons for that. Thanks", "I'm not sure which problem you are talking about, hard to answer without knowing the context.", "Hi patil,\n\nI am working on a translation task/competition, so as a text2text\nframework, I used t5 even though it is not pretrained on my source language\njust for the knowledge sake,\nThe notebook link here\n\nhttps://colab.research.google.com/drive/1w0brh9KULV4zDmgMme4csXeCpfAR960N?usp=sharing\n\nthe test and valid loss was declining to zero even though the later\ngenerated predictions wasn't good so I changed to Bart even though no\nimprovement, I have no idea why? I would love to know the reason.\n\nI found MarianMT just yesterday which was pretrained for my task yo-en, I\nhave started working with it. I think it would also be the case for MarianMt\nThank you.\n\nOn Fri, Jan 1, 2021, 8:38 AM Suraj Patil <[email protected]> wrote:\n\n> I'm not sure which problem you are talking about, hard to answer without\n> knowing the context.\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/4210#issuecomment-753280527>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AOP3ZJFLJLLFGUSFCU3WKB3SXV3X7ANCNFSM4M3PUPXA>\n> .\n>\n" ]
1,588
1,609
1,588
NONE
null
Hi, Is there a way to compute perplexity for T5 or BART outputs?
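A minimal sketch of the approach suggested in the thread above: pass the target ids as labels so the first output is the mean cross-entropy loss over target tokens, then exponentiate. Note the argument was called `lm_labels` in the transformers versions contemporary with this issue and `labels` later, as the thread itself points out:

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
model.eval()

input_ids = tokenizer.encode("translate English to German: Hello", return_tensors="pt")
target_ids = tokenizer.encode("Hallo", return_tensors="pt")

with torch.no_grad():
    # With labels passed, the first element of the output is the loss.
    loss = model(input_ids=input_ids, labels=target_ids)[0]

# Perplexity is the exponential of the mean cross-entropy loss.
print(torch.exp(loss).item())
```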
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4210/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4210/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4209
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4209/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4209/comments
https://api.github.com/repos/huggingface/transformers/issues/4209/events
https://github.com/huggingface/transformers/pull/4209
614,178,429
MDExOlB1bGxSZXF1ZXN0NDE0Nzg1MDMz
4,209
Fix for IndexError when Roberta Tokenizer is called on empty text
{ "login": "malteos", "id": 7977470, "node_id": "MDQ6VXNlcjc5Nzc0NzA=", "avatar_url": "https://avatars.githubusercontent.com/u/7977470?v=4", "gravatar_id": "", "url": "https://api.github.com/users/malteos", "html_url": "https://github.com/malteos", "followers_url": "https://api.github.com/users/malteos/followers", "following_url": "https://api.github.com/users/malteos/following{/other_user}", "gists_url": "https://api.github.com/users/malteos/gists{/gist_id}", "starred_url": "https://api.github.com/users/malteos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/malteos/subscriptions", "organizations_url": "https://api.github.com/users/malteos/orgs", "repos_url": "https://api.github.com/users/malteos/repos", "events_url": "https://api.github.com/users/malteos/events{/privacy}", "received_events_url": "https://api.github.com/users/malteos/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,588
1,592
1,592
CONTRIBUTOR
null
Check if `text` is set to avoid an IndexError. Fix for https://github.com/huggingface/transformers/issues/3809
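A minimal sketch of the kind of guard involved, illustrative rather than the exact patch: the Roberta tokenizer decides whether to add a prefix space by inspecting the first character, which raises an IndexError on empty input unless it checks for emptiness first. The helper name below is hypothetical:

```python
def maybe_add_prefix_space(text: str, add_prefix_space: bool) -> str:
    # Guard text[0] behind a truthiness check so empty strings no longer raise.
    if add_prefix_space and text and not text[0].isspace():
        return " " + text
    return text
```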
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4209/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4209/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4209", "html_url": "https://github.com/huggingface/transformers/pull/4209", "diff_url": "https://github.com/huggingface/transformers/pull/4209.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4209.patch", "merged_at": 1592838546000 }
https://api.github.com/repos/huggingface/transformers/issues/4208
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4208/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4208/comments
https://api.github.com/repos/huggingface/transformers/issues/4208/events
https://github.com/huggingface/transformers/pull/4208
614,157,176
MDExOlB1bGxSZXF1ZXN0NDE0NzY3ODE2
4,208
Remove comma that turns int into tuple
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4208?src=pr&el=h1) Report\n> Merging [#4208](https://codecov.io/gh/huggingface/transformers/pull/4208?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cafa6a9e29f3e99c67a1028f8ca779d439bc0689&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4208/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4208?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4208 +/- ##\n==========================================\n- Coverage 78.32% 78.31% -0.02% \n==========================================\n Files 120 120 \n Lines 19854 19854 \n==========================================\n- Hits 15551 15548 -3 \n- Misses 4303 4306 +3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4208?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4208/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.88% <ø> (-0.30%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4208/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.96% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4208/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.96% <0.00%> (-0.13%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4208?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4208?src=pr&el=footer). Last update [cafa6a9...bfbed71](https://codecov.io/gh/huggingface/transformers/pull/4208?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Superseded and fixed by #4213 " ]
1,588
1,588
1,588
MEMBER
null
cf @stefan-it's error on Slack
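The bug class named in the title, in two lines (illustrative variable and value):

```python
max_steps = 1000,   # trailing comma: max_steps is now the tuple (1000,)
max_steps = 1000    # intended: a plain int
```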
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4208/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4208/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4208", "html_url": "https://github.com/huggingface/transformers/pull/4208", "diff_url": "https://github.com/huggingface/transformers/pull/4208.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4208.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4207
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4207/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4207/comments
https://api.github.com/repos/huggingface/transformers/issues/4207/events
https://github.com/huggingface/transformers/issues/4207
614,151,196
MDU6SXNzdWU2MTQxNTExOTY=
4,207
BertForTokenClassification, logits on sequence tagging task
{ "login": "mbkrafft", "id": 28674398, "node_id": "MDQ6VXNlcjI4Njc0Mzk4", "avatar_url": "https://avatars.githubusercontent.com/u/28674398?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mbkrafft", "html_url": "https://github.com/mbkrafft", "followers_url": "https://api.github.com/users/mbkrafft/followers", "following_url": "https://api.github.com/users/mbkrafft/following{/other_user}", "gists_url": "https://api.github.com/users/mbkrafft/gists{/gist_id}", "starred_url": "https://api.github.com/users/mbkrafft/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mbkrafft/subscriptions", "organizations_url": "https://api.github.com/users/mbkrafft/orgs", "repos_url": "https://api.github.com/users/mbkrafft/repos", "events_url": "https://api.github.com/users/mbkrafft/events{/privacy}", "received_events_url": "https://api.github.com/users/mbkrafft/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,588
1,594
1,594
NONE
null
# ❓ Questions & Help ## Details <!-- Description of your issue --> I'm trying to use BertForTokenClassification to do a sequence tagging task. During training, everything seems fine and loss is decreasing per epoch, but when I put the model in evaluation mode and feed it inputs, the logits for each token are all the same. My code looks like this: ``` class PretrainedBert(nn.Module): def __init__(self, device, pretrained_path='bert-base-multilingual-cased', output_dim=5, learning_rate=0.01, eps=1e-8, max_grad_norm=1.0, model_name='bert_multilingual', cased=True ): super(PretrainedBert, self).__init__() self.device = device self.output_dim = output_dim self.learning_rate = learning_rate self.max_grad_norm = max_grad_norm self.model_name = model_name self.cased = cased # Load pretrained bert self.bert = BertForTokenClassification.from_pretrained(pretrained_path, num_labels=output_dim, output_attentions = False, output_hidden_states = False) self.tokenizer = BertTokenizer.from_pretrained(pretrained_path, do_lower_case=not cased) param_optimizer = list(self.bert.named_parameters()) no_decay = ['bias', 'gamma', 'beta'] optimizer_grouped_parameters = [ {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.01}, {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.0}] self.optimizer = AdamW(optimizer_grouped_parameters, lr=learning_rate, eps=eps) def forward(self, input_ids, attention_mask, labels=None): output = self.bert(input_ids, attention_mask=attention_mask, labels=labels) return output def fit(self, train_loader, dev_loader, epochs=10): """ :param test_loader: torch.utils.data.DataLoader object :param test_loader: torch.utils.data.DataLoader object :param epochs: int """ scheduler = get_linear_schedule_with_warmup(self.optimizer, num_warmup_steps=0,num_training_steps=len(train_loader)*epochs) for epoch in range(epochs): # Iterate over training data epoch_time = time.time() self.train() epoch_loss = 0 num_batches = 0 for raw_text, x, y, idx in tqdm(train_loader): self.zero_grad() # Get gold labels and extend them to cover the wordpieces created by bert y = self._extend_labels(raw_text, y) y, _ = pad_packed_sequence(y, batch_first=True) batches_len, seq_length = y.size() y.to(self.device) # Run input through bert and get logits bert_tokenized = self.tokenizer.batch_encode_plus([' '.join( text) for text in raw_text], max_length=seq_length, pad_to_max_length=True) input_ids, token_type_ids, attention_masks = bert_tokenized[ 'input_ids'], bert_tokenized['token_type_ids'], bert_tokenized['attention_mask'] loss, logits = self.forward(torch.LongTensor( input_ids), torch.LongTensor(attention_masks), labels=y) # Help prevent the "exploding gradients" problem. 
torch.nn.utils.clip_grad_norm_(parameters=self.bert.parameters(), max_norm=self.max_grad_norm) # update parameters and learning rate loss.backward() self.optimizer.step() scheduler.step() epoch_loss += loss.item() num_batches += 1 epoch_time = time.time() - epoch_time print() print("Epoch {0} loss: {1:.3f}".format(epoch + 1, epoch_loss / num_batches)) print("Dev") binary_f1, propor_f1 = self.evaluate(dev_loader) def predict(self, test_loader): """ :param test_loader: torch.utils.data.DataLoader object with batch_size=1 """ self.eval() predictions, golds, sents = [], [], [] for raw_text, x, y, idx in tqdm(test_loader): # Run input through bert and get logits bert_tokenized = self.tokenizer.encode_plus(' '.join(raw_text[0])) input_ids, token_type_ids, attention_masks = bert_tokenized[ 'input_ids'], bert_tokenized['token_type_ids'], bert_tokenized['attention_mask'] with torch.no_grad(): logits = self.forward(torch.LongTensor( input_ids).unsqueeze(0), torch.LongTensor(attention_masks).unsqueeze(0)) # remove batch dim and [CLS]+[SEP] tokens from logits logits = logits[0].squeeze(0)[1:-1:] # mean pool wordpiece rows of logits and argmax to get predictions preds = self._logits_to_preds(logits, y[0], raw_text[0]) predictions.append(preds) golds.append(y[0]) sents.append(raw_text[0]) return predictions, golds, sents def evaluate(self, test_loader): """ Returns the binary and proportional F1 scores of the model on the examples passed via test_loader. :param test_loader: torch.utils.data.DataLoader object with batch_size=1 """ preds, golds, sents = self.predict(test_loader) flat_preds = [int(i) for l in preds for i in l] flat_golds = [int(i) for l in golds for i in l] analysis = get_analysis(sents, preds, golds) binary_f1 = binary_analysis(analysis) propor_f1 = proportional_analysis(flat_golds, flat_preds) return binary_f1, propor_f1 def _extend_labels(self, text, labels): extended_labels = [] for idx, (text_seq, label_seq) in enumerate(zip(text, labels)): extended_seq_labels = [] for word, label in zip(text_seq, label_seq): n_subwords = len(self.tokenizer.tokenize(word)) extended_seq_labels.extend([label.item()] * n_subwords) extended_labels.append(torch.LongTensor(extended_seq_labels)) extended_labels = pack_sequence(extended_labels, enforce_sorted=False) return extended_labels def _logits_to_preds(self, logits, y, raw_text): preds = torch.zeros(len(y)) head = 0 for idx, word in enumerate(raw_text): n_subwords = len(self.tokenizer.tokenize(word)) if n_subwords > 1: preds[idx] = torch.argmax(torch.mean( logits[head:head+n_subwords, :], dim=0)) else: preds[idx] = torch.argmax(logits[head, :]) head = head + n_subwords return preds ``` Example input: ``` raw_text = ['Donkey', 'Kong', 'Country', ':'] y = [tensor([0, 0, 0, 0])] ``` Produces these results: ``` tokenized_words = ['Don', '##key', 'Kong', 'Country', ':'] logits = tensor([[-3.2811, 1.1715, 1.2381, 0.5201, 1.0921], [-3.2813, 1.1715, 1.2382, 0.5201, 1.0922], [-3.2815, 1.1716, 1.2383, 0.5202, 1.0923], [-3.2814, 1.1716, 1.2383, 0.5202, 1.0922], [-3.2811, 1.1715, 1.2381, 0.5201, 1.0921]]) preds = tensor([2., 2., 2., 2.]) ``` Any ideas for what I'm doing wrong? Thanks for any help in advance.
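One detail worth flagging in the snippet above, independent of the reported symptom: `Tensor.to()` is not in-place, so the bare `y.to(self.device)` line has no effect. The result must be reassigned, as in this standalone example:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
y = torch.zeros(4, dtype=torch.long)

y.to(device)      # no-op for y: the moved tensor is returned, then discarded
y = y.to(device)  # correct: reassign the result
```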
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4207/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4207/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4206
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4206/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4206/comments
https://api.github.com/repos/huggingface/transformers/issues/4206/events
https://github.com/huggingface/transformers/issues/4206
614,136,128
MDU6SXNzdWU2MTQxMzYxMjg=
4,206
How to get output from BERTNextSentencePrediction by passing One Sentence and Next Sentence getting predicted
{ "login": "ali4friends71", "id": 40916262, "node_id": "MDQ6VXNlcjQwOTE2MjYy", "avatar_url": "https://avatars.githubusercontent.com/u/40916262?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ali4friends71", "html_url": "https://github.com/ali4friends71", "followers_url": "https://api.github.com/users/ali4friends71/followers", "following_url": "https://api.github.com/users/ali4friends71/following{/other_user}", "gists_url": "https://api.github.com/users/ali4friends71/gists{/gist_id}", "starred_url": "https://api.github.com/users/ali4friends71/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ali4friends71/subscriptions", "organizations_url": "https://api.github.com/users/ali4friends71/orgs", "repos_url": "https://api.github.com/users/ali4friends71/repos", "events_url": "https://api.github.com/users/ali4friends71/events{/privacy}", "received_events_url": "https://api.github.com/users/ali4friends71/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,588
1,594
1,594
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details How to get output from BERTNextSentencePrediction by passing One Sentence and Next Sentence getting predicted ? I got the Prediction score but how and where should I pass the first sentence so I can get the next sentence predicted ? Codefrom torch.nn.functional import softmax from transformers import BertForNextSentencePrediction, BertTokenizer seq_A = 'I like cookies !' seq_B = 'Do you like them ?' # load pretrained model and a pretrained tokenizer model = BertForNextSentencePrediction.from_pretrained('bert-base-cased') tokenizer = BertTokenizer.from_pretrained('bert-base-cased') # encode the two sequences. Particularly, make clear that they must be # encoded as "one" input to the model by using 'seq_B' as the 'text_pair' encoded = tokenizer.encode_plus(seq_A, text_pair=seq_B, return_tensors='pt') print(encoded) # {'input_ids': tensor([[ 101, 146, 1176, 18621, 106, 102, 2091, 1128, 1176, 1172, 136, 102]]), # 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]]), # 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])} # NOTE how the token_type_ids are 0 for all tokens in seq_A and 1 for seq_B, # this way the model knows which token belongs to which sequence # a model's output is a tuple, we only need the output tensor containing # the relationships which is the first item in the tuple seq_relationship_logits = model(**encoded)[0] # we still need softmax to convert the logits into probabilities # index 0: sequence B is a continuation of sequence A # index 1: sequence B is a random sequence probs = softmax(seq_relationship_logits, dim=1) print(seq_relationship_logits) print(probs) # tensor([[9.9993e-01, 6.7607e-05]], grad_fn=<SoftmaxBackward>) # very high value for index 0: high probability of seq_B being a continuation of seq_A <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
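Worth clarifying, since it answers the question as posed: `BertForNextSentencePrediction` is a binary classifier, not a generator. It scores whether a given sentence B follows sentence A, so there is no way to pass only sentence A and get sentence B out. What it can do is rank candidate continuations, sketched below with made-up candidates:

```python
from torch.nn.functional import softmax
from transformers import BertForNextSentencePrediction, BertTokenizer

model = BertForNextSentencePrediction.from_pretrained('bert-base-cased')
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')

seq_A = 'I like cookies !'
candidates = ['Do you like them ?', 'The stock market fell today .']  # hypothetical

for seq_B in candidates:
    encoded = tokenizer.encode_plus(seq_A, text_pair=seq_B, return_tensors='pt')
    logits = model(**encoded)[0]
    # index 0 = "B follows A", index 1 = "B is a random sentence"
    prob_is_next = softmax(logits, dim=1)[0, 0].item()
    print(f"{prob_is_next:.4f}  {seq_B}")
```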
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4206/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4205
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4205/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4205/comments
https://api.github.com/repos/huggingface/transformers/issues/4205/events
https://github.com/huggingface/transformers/issues/4205
614,132,175
MDU6SXNzdWU2MTQxMzIxNzU=
4,205
Issues with RobertaTokenizer and unicode characters
{ "login": "yanneyanne", "id": 15113131, "node_id": "MDQ6VXNlcjE1MTEzMTMx", "avatar_url": "https://avatars.githubusercontent.com/u/15113131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanneyanne", "html_url": "https://github.com/yanneyanne", "followers_url": "https://api.github.com/users/yanneyanne/followers", "following_url": "https://api.github.com/users/yanneyanne/following{/other_user}", "gists_url": "https://api.github.com/users/yanneyanne/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanneyanne/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanneyanne/subscriptions", "organizations_url": "https://api.github.com/users/yanneyanne/orgs", "repos_url": "https://api.github.com/users/yanneyanne/repos", "events_url": "https://api.github.com/users/yanneyanne/events{/privacy}", "received_events_url": "https://api.github.com/users/yanneyanne/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,588
1,595
1,595
NONE
null
# 🐛 Bug

## Information

Model I am using (Bert, XLNet ...): Roberta

Language I am using the model on (English, Chinese ...): English

## To reproduce

Steps to reproduce the behavior:

1. Insert a jumble of some weird unicode characters into the vocabulary file. These unicode characters must exist on their own somewhere else in the vocab. (Note: I didn't actually do this manual vocab editing when I encountered the bug. Since I'm using a custom vocabulary, I'm simply detailing the steps to reproduce the behavior.)

```
# vocab.txt
...
ĠOHâ
...
```

(Let's say this was inserted on line 1001. Don't forget to account for this extra word with a change in vocab size in your config file.)

2. Load an instance of `RobertaTokenizer` with this vocab file.
3. Call

```python3
input_ids = tokenizer.encode_plus('OHâ', add_special_tokens=True, add_prefix_space=True,
                                  max_length=max_seq_length,
                                  return_token_type_ids=True)['input_ids']
print(input_ids)
```

## Expected behavior

The special start-of-word character `Ġ` will be added by the tokenizer, so the input ids we'd expect would be `[0, 1000, 2]` (given that `0` and `2` correspond to your start/end tokens, and `1000` corresponds to the index where we inserted the unicode jumble `ĠOHâ`). Instead, I'm getting some seemingly unrelated input ids corresponding to other weird unicode characters not included in the inserted unicode jumble.

## Environment info

- `transformers` version: 2.8
- Platform: Linux and MacOS
- Python version: 3.7
- PyTorch version (GPU?): torch 1.4
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
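A plausible explanation for the symptom, stated as an assumption rather than a confirmed diagnosis: RoBERTa uses GPT-2-style byte-level BPE, so vocabulary entries are not raw unicode but a printable remapping of each UTF-8 byte. 'â' (U+00E2) is the byte pair C3 A2, which the byte encoder renders as 'Ã¢', so a hand-inserted entry like `ĠOHâ` would never match what the tokenizer actually produces. A quick check:

```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
# The leading space becomes 'Ġ' and each non-ASCII UTF-8 byte is remapped to a
# printable symbol, so the subword for 'â' shows up as 'Ã¢', not 'â'.
print(tokenizer.tokenize(" OHâ"))
```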
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4205/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4205/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4204
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4204/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4204/comments
https://api.github.com/repos/huggingface/transformers/issues/4204/events
https://github.com/huggingface/transformers/issues/4204
614,117,788
MDU6SXNzdWU2MTQxMTc3ODg=
4,204
KeyError with a fine-tuned model
{ "login": "tbabinet", "id": 37694371, "node_id": "MDQ6VXNlcjM3Njk0Mzcx", "avatar_url": "https://avatars.githubusercontent.com/u/37694371?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tbabinet", "html_url": "https://github.com/tbabinet", "followers_url": "https://api.github.com/users/tbabinet/followers", "following_url": "https://api.github.com/users/tbabinet/following{/other_user}", "gists_url": "https://api.github.com/users/tbabinet/gists{/gist_id}", "starred_url": "https://api.github.com/users/tbabinet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tbabinet/subscriptions", "organizations_url": "https://api.github.com/users/tbabinet/orgs", "repos_url": "https://api.github.com/users/tbabinet/repos", "events_url": "https://api.github.com/users/tbabinet/events{/privacy}", "received_events_url": "https://api.github.com/users/tbabinet/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "How many labels/entities are there in your model ? Can you check if the key 422 exists in the models config's id2label by printing it or by opening the config file. You can print config using `model.config` ", "Hello\r\nIt seems I had simply been misusing my model : I was instantiating an `AutoModel`, when I should have been using the `BertForTokenClassification` class. \r\nThanks anyway ! :) " ]
1,588
1,588
1,588
NONE
null
I've fine-tuned AllenAI's SciBERT to do some NER using the `run_ner.py` script, but I keep bumping into a KeyError. My code is the following:

```
m = AutoModel.from_pretrained("models/sciBERT_ner_e50/")
tok = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
nlp = pipeline("ner", model=m, tokenizer=tok)
```

where `models/sciBERT_ner_e50` contains my model and the files it needs to run properly. When I run something like `nlp(abstract)`, the code crashes and sends me the following error:

```
KeyError                                  Traceback (most recent call last)
<ipython-input-86-793f89d86633> in <module>
      3         if(len(row.abstract.split())<512):
      4             abst = row.abstract
----> 5             print(nlp(abst))
      6 
      7     except ValueError as ve:

c:\users\tbabi\appdata\local\programs\python\python37\lib\site-packages\transformers\pipelines.py in __call__(self, *texts, **kwargs)
    792             answer = []
    793             for idx, label_idx in enumerate(labels_idx):
--> 794                 if self.model.config.id2label[label_idx] not in self.ignore_labels:
    795                     answer += [
    796                         {

KeyError: 422
```

Has anyone else met this error? I've been struggling with this for quite some time now, and any advice/thoughts/insights on what's going on would be greatly appreciated :) Thank you!
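For reference, the thread's resolution in code form (paths as in the report): loading with the task-specific class keeps the token-classification head, whereas plain `AutoModel` returns hidden states whose argmax positions then fail the `id2label` lookup in the pipeline:

```python
from transformers import AutoTokenizer, BertForTokenClassification, pipeline

# Load with the task-specific class so the classification head
# (and its id2label mapping) is actually used:
m = BertForTokenClassification.from_pretrained("models/sciBERT_ner_e50/")
tok = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
nlp = pipeline("ner", model=m, tokenizer=tok)
```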
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4204/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4204/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4203
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4203/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4203/comments
https://api.github.com/repos/huggingface/transformers/issues/4203/events
https://github.com/huggingface/transformers/pull/4203
614,109,244
MDExOlB1bGxSZXF1ZXN0NDE0NzI4ODM5
4,203
Use with_suffix to change the extension of the path
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4203?src=pr&el=h1) Report\n> Merging [#4203](https://codecov.io/gh/huggingface/transformers/pull/4203?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/026097b9ee7862905ec3f3b0e729c5dbc95a0fd9&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `0.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4203/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4203?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4203 +/- ##\n==========================================\n+ Coverage 78.43% 78.44% +0.01% \n==========================================\n Files 120 120 \n Lines 19800 19797 -3 \n==========================================\n Hits 15530 15530 \n+ Misses 4270 4267 -3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4203?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/hf\\_argparser.py](https://codecov.io/gh/huggingface/transformers/pull/4203/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcmdwYXJzZXIucHk=) | `61.97% <0.00%> (+1.16%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4203/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `41.83% <0.00%> (-0.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4203/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4203/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.38% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4203?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4203?src=pr&el=footer). Last update [026097b...a73d487](https://codecov.io/gh/huggingface/transformers/pull/4203?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks!" ]
1,588
1,588
1,588
COLLABORATOR
null
As per https://github.com/huggingface/transformers/pull/3934#discussion_r421307659. Note that this does not rename the file, it simply changes the path 'string' to another one with a different suffix.
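The behavior in question, for reference (a pure-path operation; nothing touches the filesystem):

```python
from pathlib import Path

p = Path("checkpoints/model.bin")
print(p.with_suffix(".json"))  # checkpoints/model.json
# p itself is unchanged, and no file is renamed on disk.
```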
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4203/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4203/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4203", "html_url": "https://github.com/huggingface/transformers/pull/4203", "diff_url": "https://github.com/huggingface/transformers/pull/4203.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4203.patch", "merged_at": 1588864497000 }
https://api.github.com/repos/huggingface/transformers/issues/4202
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4202/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4202/comments
https://api.github.com/repos/huggingface/transformers/issues/4202/events
https://github.com/huggingface/transformers/pull/4202
614,085,704
MDExOlB1bGxSZXF1ZXN0NDE0NzA5NTc5
4,202
Create README.md
{ "login": "savasy", "id": 6584825, "node_id": "MDQ6VXNlcjY1ODQ4MjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6584825?v=4", "gravatar_id": "", "url": "https://api.github.com/users/savasy", "html_url": "https://github.com/savasy", "followers_url": "https://api.github.com/users/savasy/followers", "following_url": "https://api.github.com/users/savasy/following{/other_user}", "gists_url": "https://api.github.com/users/savasy/gists{/gist_id}", "starred_url": "https://api.github.com/users/savasy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/savasy/subscriptions", "organizations_url": "https://api.github.com/users/savasy/orgs", "repos_url": "https://api.github.com/users/savasy/repos", "events_url": "https://api.github.com/users/savasy/events{/privacy}", "received_events_url": "https://api.github.com/users/savasy/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,588
1,588
1,588
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4202/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4202/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4202", "html_url": "https://github.com/huggingface/transformers/pull/4202", "diff_url": "https://github.com/huggingface/transformers/pull/4202.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4202.patch", "merged_at": 1588890683000 }
https://api.github.com/repos/huggingface/transformers/issues/4201
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4201/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4201/comments
https://api.github.com/repos/huggingface/transformers/issues/4201/events
https://github.com/huggingface/transformers/pull/4201
614,082,670
MDExOlB1bGxSZXF1ZXN0NDE0NzA3MDM4
4,201
Ensure fast tokenizer can construct single-element tensor without pad token
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,588
1,588
1,588
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4201/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4201/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4201", "html_url": "https://github.com/huggingface/transformers/pull/4201", "diff_url": "https://github.com/huggingface/transformers/pull/4201.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4201.patch", "merged_at": 1588860174000 }
https://api.github.com/repos/huggingface/transformers/issues/4200
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4200/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4200/comments
https://api.github.com/repos/huggingface/transformers/issues/4200/events
https://github.com/huggingface/transformers/issues/4200
614,079,056
MDU6SXNzdWU2MTQwNzkwNTY=
4,200
[Proposal] Small HfAPI Cleanup
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,588
1,594
1,594
CONTRIBUTOR
null
- Implement `__dict__` and `__str__` so that one need not guess the class attributes of `ModelInfo` and `S3Object`.
- Delete redundant `S3Obj` class (`S3Object` seems to be a superset).
- Rename to `HFApi`?

Thoughts @julien-c @LysandreJik?
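A minimal sketch of the first bullet, with made-up attribute names: exposing the instance dict through `__repr__` (which `str()` falls back to) lets callers discover the attributes interactively:

```python
class S3Object:
    def __init__(self, key: str, etag: str, size: int):  # illustrative fields
        self.key = key
        self.etag = etag
        self.size = size

    def __repr__(self) -> str:
        # str() falls back to __repr__, so this covers both proposals.
        return f"{self.__class__.__name__}({self.__dict__!r})"
```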
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4200/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4200/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4199
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4199/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4199/comments
https://api.github.com/repos/huggingface/transformers/issues/4199/events
https://github.com/huggingface/transformers/pull/4199
614,058,964
MDExOlB1bGxSZXF1ZXN0NDE0Njg3NTQ0
4,199
[README] Corrected some grammatical mistakes
{ "login": "girishponkiya", "id": 2093282, "node_id": "MDQ6VXNlcjIwOTMyODI=", "avatar_url": "https://avatars.githubusercontent.com/u/2093282?v=4", "gravatar_id": "", "url": "https://api.github.com/users/girishponkiya", "html_url": "https://github.com/girishponkiya", "followers_url": "https://api.github.com/users/girishponkiya/followers", "following_url": "https://api.github.com/users/girishponkiya/following{/other_user}", "gists_url": "https://api.github.com/users/girishponkiya/gists{/gist_id}", "starred_url": "https://api.github.com/users/girishponkiya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/girishponkiya/subscriptions", "organizations_url": "https://api.github.com/users/girishponkiya/orgs", "repos_url": "https://api.github.com/users/girishponkiya/repos", "events_url": "https://api.github.com/users/girishponkiya/events{/privacy}", "received_events_url": "https://api.github.com/users/girishponkiya/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for your contribution @girishponkiya , merging!" ]
1,588
1,589
1,589
CONTRIBUTOR
null
Corrected some grammatical mistakes, such as adding required punctuation marks, correcting singular/plural mistakes, etc.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4199/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4199/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4199", "html_url": "https://github.com/huggingface/transformers/pull/4199", "diff_url": "https://github.com/huggingface/transformers/pull/4199.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4199.patch", "merged_at": 1589115757000 }
https://api.github.com/repos/huggingface/transformers/issues/4198
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4198/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4198/comments
https://api.github.com/repos/huggingface/transformers/issues/4198/events
https://github.com/huggingface/transformers/pull/4198
614,050,271
MDExOlB1bGxSZXF1ZXN0NDE0NjgwMzQ1
4,198
[Benchmark] Memory benchmark utils
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This google colab shows nicely how the new benchmarking utils can be used: https://colab.research.google.com/drive/1GM3WPXmZ5wZHwhdPyf7RykJS3Y12iceJ?usp=sharing", "A plotter file is included in `examples/benchmarking` which can give the following results:\r\n\r\n### Memory\r\n\r\n![memory_example](https://user-images.githubusercontent.com/23423619/82815931-8020b200-9e9a-11ea-99ad-d22517329bf1.png)\r\n\r\n### Time\r\n\r\n![time_example](https://user-images.githubusercontent.com/23423619/82815942-84e56600-9e9a-11ea-95b4-f9bc9b7a9511.png)\r\n", "Can `csv_memory_filename_inference` be called `save_csv_path`?", "> Can `csv_memory_filename_inference` be called `save_csv_path`?\r\n\r\nYeah about the exact naming I think we can decide a bit later. Wanna see first if you guys are ok with the general design. But we have up to four csv files that are saved - so `save_csv_path` is too general.", "> This is really cool, I'm looking forward to using it.\r\n> \r\n> It seems you've implemented TPU support as well, did you get a chance to try it?\r\n\r\nI didn't play around with TPU very much yet. I'd like to merge this PR which works for GPU / CPU for PyTorch now and then maybe clean things for TPU in a second PR? ", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4198?src=pr&el=h1) Report\n> Merging [#4198](https://codecov.io/gh/huggingface/transformers/pull/4198?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4bd7cfc30952922b6329ec2d7637f13729317014&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4198/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4198?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4198 +/- ##\n=======================================\n Coverage 77.39% 77.39% \n=======================================\n Files 128 128 \n Lines 20939 20939 \n=======================================\n Hits 16205 16205 \n Misses 4734 4734 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4198?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4198?src=pr&el=footer). Last update [4bd7cfc...4bd7cfc](https://codecov.io/gh/huggingface/transformers/pull/4198?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,588
1,590
1,590
MEMBER
null
## Description

Currently, we have a benchmarks file in examples, `examples/benchmarks.py`, which measures time and speed for inference, and one `src/transformers/benchmark_utils.py` file which includes the memory tracing from @thomwolf. This PR generalizes the benchmarks much more:

- We have `benchmark_args_utils`, `benchmark_args`, `benchmark_tf_args` similar to `trainer_args`. Because there is such a big overlap between TF and torch, I decided to have a `benchmark_args_utils`.
- We have a `benchmark_utils` including the previous code of `benchmark_utils` and a new abstract `Benchmark` class. `PyTorchBenchmark` and `TensorflowBenchmark` inherit from this class and implement the abstract methods `inference` and `train`.
- Instead of only being able to measure our "default" models identified by their `model_name`, we can create a special config and measure the memory and time usage of the corresponding model before starting to train.
- In addition to inference, we can also measure train.
- **IMPORTANT**: The line-by-line tracing implemented by @thomwolf is super useful to optimize one's code and see exactly which line consumes most of the memory, but it does not give the peak memory usage as one can see in `nvidia-smi`, which is relevant to decide how much RAM is needed for the model. I decided not to use the line-by-line tracing to plot the overall memory usage on GPU. It should rather only be used when one wants to optimize their code instead of benchmarking the model. For benchmarks, the PyTorch method `torch.cuda.max_memory_reserved` is a better fit - it's much faster and gives the correct value.
- Also I added a `run_script` and a `plot_script` to the examples.
- @julien-c - I think it would be super nice to have such plots on our website somewhere. If people can quickly see how much more efficient model x is over y for a given batch size and sequence length, I think it would be super useful.

## TODO:

- [x] Write tests
- [x] Make pretty (more type hints + smaller refactoring)

## Review

@thomwolf, @julien-c - It would be awesome if you could take a look here and check if you are OK with the general design choice. To get a better feeling for how the Benchmark can be used, you can check out this notebook: https://colab.research.google.com/drive/1GM3WPXmZ5wZHwhdPyf7RykJS3Y12iceJ?usp=sharing

## UPDATE

I think the PR is ready for merge now. The notebook https://colab.research.google.com/drive/1GM3WPXmZ5wZHwhdPyf7RykJS3Y12iceJ?usp=sharing is updated and shows what environment and version infos are shown @thomwolf. Tests are included. The next PR should take care of PyTorch jit tracing and TPU and TF @LysandreJik @julien-c

## Future PR:

- [ ] Support Pytorch Tracing (discuss with Lysandre a bit before)
- [ ] Support TPU (discuss with Lysandre a bit before)
- [ ] TF version
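For context, a minimal sketch of the measurement primitive mentioned above; it requires a CUDA device, and the exact API names vary slightly across PyTorch versions:

```python
import torch

torch.cuda.reset_peak_memory_stats()
x = torch.randn(1024, 1024, device="cuda")
y = x @ x  # the workload to profile
torch.cuda.synchronize()

# Peak memory reserved by the caching allocator, roughly what nvidia-smi shows:
print(f"peak reserved: {torch.cuda.max_memory_reserved() / 2**20:.0f} MiB")
```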
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4198/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4198/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4198", "html_url": "https://github.com/huggingface/transformers/pull/4198", "diff_url": "https://github.com/huggingface/transformers/pull/4198.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4198.patch", "merged_at": 1590614537000 }
https://api.github.com/repos/huggingface/transformers/issues/4197
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4197/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4197/comments
https://api.github.com/repos/huggingface/transformers/issues/4197/events
https://github.com/huggingface/transformers/issues/4197
614,044,933
MDU6SXNzdWU2MTQwNDQ5MzM=
4,197
AutoTokenizer not able to load saved Roberta Tokenizer
{ "login": "noncuro", "id": 5695500, "node_id": "MDQ6VXNlcjU2OTU1MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/5695500?v=4", "gravatar_id": "", "url": "https://api.github.com/users/noncuro", "html_url": "https://github.com/noncuro", "followers_url": "https://api.github.com/users/noncuro/followers", "following_url": "https://api.github.com/users/noncuro/following{/other_user}", "gists_url": "https://api.github.com/users/noncuro/gists{/gist_id}", "starred_url": "https://api.github.com/users/noncuro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/noncuro/subscriptions", "organizations_url": "https://api.github.com/users/noncuro/orgs", "repos_url": "https://api.github.com/users/noncuro/repos", "events_url": "https://api.github.com/users/noncuro/events{/privacy}", "received_events_url": "https://api.github.com/users/noncuro/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "It looks like when you load a tokenizer from a dir it's also looking for files to load it's related model config via `AutoConfig.from_pretrained`. It does this because it's using the information from the config to to determine which model class the tokenizer belongs to (BERT, XLNet, etc ...) since there is no way of knowing that with the saved tokenizer files themselves. This is deceptive, as intuitively it would seem you should be able to load a tokenizer without needing it's model config. Plus in the documentation of `AutoTokenizer` under examples it states:\r\n\r\n```\r\n# If vocabulary files are in a directory (e.g. tokenizer was saved using `save_pretrained('./test/saved_model/')`)\r\ntokenizer = AutoTokenizer.from_pretrained('./test/bert_saved_model/')\r\n```\r\n\r\nEither:\r\n1. The documentation needs an update to clarify that `config` must be a kwarg of `from_pretrained` or the dir must also have the files for `AutoConfig` (if loading from a dir)\r\n2. `AutoTokenizer` should be changed to be independent from `AutoConfig` (maybe the model base class name should be stored along with \"tokenizer_config.json\")", "I had the same issue loading a saved tokenizer. Separately loading and saving the config file to the tokenizer directory worked.\r\n```\r\nconfig_file = AutoConfig.from_pretrained('distilgpt2')\r\nconfig_file.save_pretrained(saved_tokenizer_directory)\r\n```", "+1 to @jaymody. It seems as though that `AutoTokenizer` is coupled to the model that uses it, and I don't see why it should be impossible to load a tokenizer without an explicit model configuration.\r\n\r\nFor those that happen to be loading a tokenizer and model using the `Auto` classes, I was able to use the following workaround where my tokenizer relies on the configuration in the model directory:\r\n\r\n```\r\ntokenizer = AutoTokenizer.from_pretrained(<tokenizer-dir>, config=AutoConfig.from_pretrained(<model-dir>))\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,588
1,595
1,595
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Roberta Language I am using the model on (English, Chinese ...): English The problem arises when using: * [x] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) I'm trying to run `run_language_modelling.py` with my own tokenizer The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ``` !pip install transformers --upgrade from transformers import AutoTokenizer my_tokenizer = AutoTokenizer.from_pretrained("distilroberta-base") !mkdir my_tokenizer my_tokenizer.save_pretrained("my_tokenizer") my_tokenizer2 = AutoTokenizer.from_pretrained("./my_tokenizer") ``` ### Stack Trace: ``` --------------------------------------------------------------------------- OSError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, pretrained_config_archive_map, **kwargs) 246 resume_download=resume_download, --> 247 local_files_only=local_files_only, 248 ) 4 frames /usr/local/lib/python3.6/dist-packages/transformers/file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, local_files_only) 260 # File, but it doesn't exist. --> 261 raise EnvironmentError("file {} not found".format(url_or_filename)) 262 else: OSError: file ./my_tokenizer/config.json not found During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) <ipython-input-16-b699c4abfe8e> in <module>() 3 get_ipython().system('mkdir my_tokenizer') 4 my_tokenizer.save_pretrained("my_tokenizer") ----> 5 my_tokenizer2 = AutoTokenizer.from_pretrained("./my_tokenizer") /usr/local/lib/python3.6/dist-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 184 config = kwargs.pop("config", None) 185 if not isinstance(config, PretrainedConfig): --> 186 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) 187 188 if "bert-base-japanese" in pretrained_model_name_or_path: /usr/local/lib/python3.6/dist-packages/transformers/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 185 """ 186 config_dict, _ = PretrainedConfig.get_config_dict( --> 187 pretrained_model_name_or_path, pretrained_config_archive_map=ALL_PRETRAINED_CONFIG_ARCHIVE_MAP, **kwargs 188 ) 189 /usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, pretrained_config_archive_map, **kwargs) 268 ) 269 ) --> 270 raise EnvironmentError(msg) 271 272 except json.JSONDecodeError: OSError: Can't load './my_tokenizer'. Make sure that: - './my_tokenizer' is a correct model identifier listed on 'https://huggingface.co/models' - or './my_tokenizer' is the correct path to a directory containing a 'config.json' file ``` ## Expected behavior AutoTokenizer should be able to load the tokenizer from the file. Possible duplicate of #1063 #3838 <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! 
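Based on the workarounds reported in the comments field above, a minimal sketch that makes the round trip succeed by saving the model config next to the tokenizer files, so that `AutoConfig` can resolve the model class (this mirrors the reported behaviour, not an official fix):

```python
from transformers import AutoConfig, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
config = AutoConfig.from_pretrained("distilroberta-base")

tokenizer.save_pretrained("my_tokenizer")
config.save_pretrained("my_tokenizer")  # writes the config.json that AutoTokenizer looks for

# loading from the directory no longer raises the "config.json not found" error
tokenizer2 = AutoTokenizer.from_pretrained("./my_tokenizer")

# alternatively, pass the config explicitly, as suggested in the comments
tokenizer3 = AutoTokenizer.from_pretrained(
    "./my_tokenizer", config=AutoConfig.from_pretrained("distilroberta-base")
)
```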
--> - `transformers` version: - Platform: Google Colab - Python version: 3.6.9 - PyTorch version (GPU?): N/A - Tensorflow version (GPU?): N/A - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4197/reactions", "total_count": 8, "+1": 8, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4197/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4196
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4196/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4196/comments
https://api.github.com/repos/huggingface/transformers/issues/4196/events
https://github.com/huggingface/transformers/pull/4196
614,015,383
MDExOlB1bGxSZXF1ZXN0NDE0NjUyMDQw
4,196
Model card for spanish electra small
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,588
1,588
1,588
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4196/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4196/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4196", "html_url": "https://github.com/huggingface/transformers/pull/4196", "diff_url": "https://github.com/huggingface/transformers/pull/4196.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4196.patch", "merged_at": 1588944616000 }
https://api.github.com/repos/huggingface/transformers/issues/4195
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4195/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4195/comments
https://api.github.com/repos/huggingface/transformers/issues/4195/events
https://github.com/huggingface/transformers/pull/4195
613,986,159
MDExOlB1bGxSZXF1ZXN0NDE0NjI4NzUx
4,195
add XLMRobertaForQuestionAnswering
{ "login": "Liangtaiwan", "id": 20909894, "node_id": "MDQ6VXNlcjIwOTA5ODk0", "avatar_url": "https://avatars.githubusercontent.com/u/20909894?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Liangtaiwan", "html_url": "https://github.com/Liangtaiwan", "followers_url": "https://api.github.com/users/Liangtaiwan/followers", "following_url": "https://api.github.com/users/Liangtaiwan/following{/other_user}", "gists_url": "https://api.github.com/users/Liangtaiwan/gists{/gist_id}", "starred_url": "https://api.github.com/users/Liangtaiwan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Liangtaiwan/subscriptions", "organizations_url": "https://api.github.com/users/Liangtaiwan/orgs", "repos_url": "https://api.github.com/users/Liangtaiwan/repos", "events_url": "https://api.github.com/users/Liangtaiwan/events{/privacy}", "received_events_url": "https://api.github.com/users/Liangtaiwan/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Closing since already implemented" ]
1,588
1,596
1,596
CONTRIBUTOR
null
1. Add `XLMRobertaForQuestionAnswering` in `modeling_xlm_roberta.py` 2. Add `XLMRobertaForQuestionAnswering` in `modeling_auto.py`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4195/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4195/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4195", "html_url": "https://github.com/huggingface/transformers/pull/4195", "diff_url": "https://github.com/huggingface/transformers/pull/4195.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4195.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4194
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4194/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4194/comments
https://api.github.com/repos/huggingface/transformers/issues/4194/events
https://github.com/huggingface/transformers/pull/4194
613,970,377
MDExOlB1bGxSZXF1ZXN0NDE0NjE2MDY1
4,194
Optimize PyTorch's GPT2 computation graph
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "GPT2 Attention has `merge_heads` method which takes an input as `(batch, heads, seq, repr)` and outputs the flattened _(merged)_ representation of it `(batch, seq, heads * repr)`. \r\n\r\n```python\r\n def merge_heads(self, x):\r\n x = x.transpose(2, 1).contiguous()\r\n new_x_shape = x.size()[:-2] + (x.size(-2) * x.size(-1),)\r\n return x.view(*new_x_shape) # in Tensorflow implem: fct merge_states\r\n```\r\n\r\nThis can be simplified with `flatten()` to avoid constructing a tuple and accessing shape information.\r\n\r\n```python\r\n def merge_heads(self, x):\r\n x = x.transpose(2, 1).contiguous()\r\n return x.flatten(-2) # in Tensorflow implem: fct merge_states\r\n```", "GPT2 Attention has a `Conv1D` projection which is nothing else than a linear projection of the input. \r\n\r\nThe current implementation use BLAS acceleration function `addmm` (Matrix Multiplication + Add) to handle the affine projection `w * x.T + b`. \r\n\r\nThis is currently implemented with the non batched BLAS call and requires flattening the sequence over the batch axis and unflattening after the call: \r\n\r\n```python\r\ndef forward(self, x):\r\n size_out = x.size()[:-1] + (self.nf,)\r\n torch.addmm(self.bias, x, self.weight)\r\n return x.view(*size_out)\r\n```\r\n\r\nWe can optimize this call to remove all the reshape operations by leveraging the batched BLAS call (`baddbmm` : Batched Matrix Multiply + Batched Add) which takes 3D inputs as (`b, m, n)` and thus avoid the reshapings\r\n\r\n```python\r\ndef forward(self, x):\r\n return torch.baddbmm(self.bias[None, None], x, self.weight[None])\r\n```\r\n", "Use tuples instead of list for returning objects from layers (refer to [this SO post](https://stackoverflow.com/a/22140115))", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,588
1,651
1,595
MEMBER
null
Brings various optimizations to make the computation graph more efficient.
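A self-contained check of the `Conv1D` equivalence discussed in the review comments above; the tensor sizes are arbitrary, and `expand` is used because `baddbmm` requires matching batch dimensions (a sketch of the idea, not the PR's diff):

```python
import torch

batch, seq, nx, nf = 2, 5, 8, 16  # arbitrary sizes for the check
x = torch.randn(batch, seq, nx)
weight = torch.randn(nx, nf)
bias = torch.randn(nf)

# current Conv1D: flatten (batch, seq) into one axis, addmm, reshape back
out_ref = torch.addmm(bias, x.view(-1, nx), weight).view(batch, seq, nf)

# proposed: batched BLAS call on 3-D inputs, no reshaping of x
out_new = torch.baddbmm(bias[None, None], x, weight[None].expand(batch, -1, -1))

print(torch.allclose(out_ref, out_new, atol=1e-5))  # True
```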
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4194/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4194/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4194", "html_url": "https://github.com/huggingface/transformers/pull/4194", "diff_url": "https://github.com/huggingface/transformers/pull/4194.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4194.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4193
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4193/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4193/comments
https://api.github.com/repos/huggingface/transformers/issues/4193/events
https://github.com/huggingface/transformers/issues/4193
613,890,148
MDU6SXNzdWU2MTM4OTAxNDg=
4,193
Language model training with NSP using run_language_modeling.py
{ "login": "Stuffooh", "id": 50005268, "node_id": "MDQ6VXNlcjUwMDA1MjY4", "avatar_url": "https://avatars.githubusercontent.com/u/50005268?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Stuffooh", "html_url": "https://github.com/Stuffooh", "followers_url": "https://api.github.com/users/Stuffooh/followers", "following_url": "https://api.github.com/users/Stuffooh/following{/other_user}", "gists_url": "https://api.github.com/users/Stuffooh/gists{/gist_id}", "starred_url": "https://api.github.com/users/Stuffooh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Stuffooh/subscriptions", "organizations_url": "https://api.github.com/users/Stuffooh/orgs", "repos_url": "https://api.github.com/users/Stuffooh/repos", "events_url": "https://api.github.com/users/Stuffooh/events{/privacy}", "received_events_url": "https://api.github.com/users/Stuffooh/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This is currently not supported. There used to be some community maintained examples IIRC. You can have a look here: https://github.com/huggingface/transformers/tree/1.1.0/examples/lm_finetuning\r\n\r\n" ]
1,588
1,588
1,588
NONE
null
When using "run_language_modeling.py" for fine-tuning BERT is only the MLM task used or also the NSP task as described in the paper?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4193/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4192
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4192/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4192/comments
https://api.github.com/repos/huggingface/transformers/issues/4192/events
https://github.com/huggingface/transformers/pull/4192
613,887,461
MDExOlB1bGxSZXF1ZXN0NDE0NTUwMTE0
4,192
[Reformer] Docstring: fix examples again
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,588
1,588
1,588
MEMBER
null
Forgot a fix.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4192/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4192/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4192", "html_url": "https://github.com/huggingface/transformers/pull/4192", "diff_url": "https://github.com/huggingface/transformers/pull/4192.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4192.patch", "merged_at": 1588841689000 }
https://api.github.com/repos/huggingface/transformers/issues/4191
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4191/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4191/comments
https://api.github.com/repos/huggingface/transformers/issues/4191/events
https://github.com/huggingface/transformers/pull/4191
613,877,002
MDExOlB1bGxSZXF1ZXN0NDE0NTQxNzMy
4,191
[Reformer] Fix example and error message
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4191?src=pr&el=h1) Report\n> Merging [#4191](https://codecov.io/gh/huggingface/transformers/pull/4191?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/96c78396ce1baf5e19c0618689005f93c7f42d79&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4191/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4191?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4191 +/- ##\n==========================================\n- Coverage 78.41% 78.41% -0.01% \n==========================================\n Files 120 120 \n Lines 19785 19785 \n==========================================\n- Hits 15515 15514 -1 \n- Misses 4270 4271 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4191?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/4191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `88.17% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4191?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4191?src=pr&el=footer). Last update [96c7839...488d787](https://codecov.io/gh/huggingface/transformers/pull/4191?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,588
1,588
1,588
MEMBER
null
Fix typos and improve error message.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4191/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4191/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4191", "html_url": "https://github.com/huggingface/transformers/pull/4191", "diff_url": "https://github.com/huggingface/transformers/pull/4191.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4191.patch", "merged_at": 1588841412000 }
https://api.github.com/repos/huggingface/transformers/issues/4190
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4190/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4190/comments
https://api.github.com/repos/huggingface/transformers/issues/4190/events
https://github.com/huggingface/transformers/pull/4190
613,871,447
MDExOlB1bGxSZXF1ZXN0NDE0NTM3MzM4
4,190
[Reformer] fix docstring
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,588
1,588
1,588
MEMBER
null
Fix small typo in docstring
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4190/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4190/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4190", "html_url": "https://github.com/huggingface/transformers/pull/4190", "diff_url": "https://github.com/huggingface/transformers/pull/4190.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4190.patch", "merged_at": 1588840112000 }
https://api.github.com/repos/huggingface/transformers/issues/4189
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4189/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4189/comments
https://api.github.com/repos/huggingface/transformers/issues/4189/events
https://github.com/huggingface/transformers/issues/4189
613,762,901
MDU6SXNzdWU2MTM3NjI5MDE=
4,189
Bug: cannot use pretrained BERT on multiple GPUs with DataParallel (PyTorch 1.5.0)
{ "login": "erikchwang", "id": 16256959, "node_id": "MDQ6VXNlcjE2MjU2OTU5", "avatar_url": "https://avatars.githubusercontent.com/u/16256959?v=4", "gravatar_id": "", "url": "https://api.github.com/users/erikchwang", "html_url": "https://github.com/erikchwang", "followers_url": "https://api.github.com/users/erikchwang/followers", "following_url": "https://api.github.com/users/erikchwang/following{/other_user}", "gists_url": "https://api.github.com/users/erikchwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/erikchwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/erikchwang/subscriptions", "organizations_url": "https://api.github.com/users/erikchwang/orgs", "repos_url": "https://api.github.com/users/erikchwang/repos", "events_url": "https://api.github.com/users/erikchwang/events{/privacy}", "received_events_url": "https://api.github.com/users/erikchwang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "By the way, when I downgrade Pytorch 1.5.0 to 1.4.0, the error disappears. ", "The same issue: #3936", "Closing in favor of #3936" ]
1,588
1,589
1,589
NONE
null
Python: 3.6.10 PyTorch: 1.5.0 Transformers: 2.8.0 and 2.9.0 In the following code, I wrap the pretrained BERT with a DataParallel wrapper so as to run it on multiple GPUs: > import torch, transformers > model = transformers.AutoModel.from_pretrained("bert-base-multilingual-cased") > model = torch.nn.DataParallel(model) > model = model.cuda() > input = torch.ones([16, 10], dtype=torch.long) > input = input.cuda() > model(input) But I got the following error: > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > File "/home/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ > result = self.forward(*input, **kwargs) > File "/home/anaconda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 155, in forward > outputs = self.parallel_apply(replicas, inputs, kwargs) > File "/home/anaconda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 165, in parallel_apply > return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) > File "/home/anaconda/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply > output.reraise() > File "/home/anaconda/lib/python3.6/site-packages/torch/_utils.py", line 395, in reraise > raise self.exc_type(msg) > StopIteration: Caught StopIteration in replica 0 on device 0. > Original Traceback (most recent call last): > File "/home/anaconda/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker > output = module(*input, **kwargs) > File "/home/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ > result = self.forward(*input, **kwargs) > File "/home/anaconda/lib/python3.6/site-packages/transformers/modeling_bert.py", line 734, in forward > extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility > StopIteration > But it will work if I remove the DataParallel wrapper.
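A minimal sketch of the failure mode, under the assumption (consistent with the traceback) that PyTorch 1.5 `DataParallel` replicas no longer expose their parameters, so `next(self.parameters())` raises `StopIteration` inside each replica; reproducing it needs more than one GPU:

```python
import torch

class Probe(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        # mirrors the failing line in modeling_bert.py: dtype of the first parameter
        dtype = next(self.parameters()).dtype  # StopIteration inside a 1.5 replica
        return self.linear(x.to(dtype))

model = torch.nn.DataParallel(Probe()).cuda()
model(torch.ones(16, 4).cuda())  # fails on PyTorch 1.5, works on 1.4
```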
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4189/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4189/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4188
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4188/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4188/comments
https://api.github.com/repos/huggingface/transformers/issues/4188/events
https://github.com/huggingface/transformers/pull/4188
613,722,647
MDExOlB1bGxSZXF1ZXN0NDE0NDIxMDgx
4,188
Fix Albert Attention
{ "login": "ZhuBaohe", "id": 35796307, "node_id": "MDQ6VXNlcjM1Nzk2MzA3", "avatar_url": "https://avatars.githubusercontent.com/u/35796307?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZhuBaohe", "html_url": "https://github.com/ZhuBaohe", "followers_url": "https://api.github.com/users/ZhuBaohe/followers", "following_url": "https://api.github.com/users/ZhuBaohe/following{/other_user}", "gists_url": "https://api.github.com/users/ZhuBaohe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZhuBaohe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZhuBaohe/subscriptions", "organizations_url": "https://api.github.com/users/ZhuBaohe/orgs", "repos_url": "https://api.github.com/users/ZhuBaohe/repos", "events_url": "https://api.github.com/users/ZhuBaohe/events{/privacy}", "received_events_url": "https://api.github.com/users/ZhuBaohe/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4188?src=pr&el=h1) Report\n> Merging [#4188](https://codecov.io/gh/huggingface/transformers/pull/4188?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/877fc56410e3a0495f62e07e66a73e6b3b9629bc&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4188/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4188?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4188 +/- ##\n=======================================\n Coverage 78.10% 78.11% \n=======================================\n Files 117 117 \n Lines 18962 18963 +1 \n=======================================\n+ Hits 14811 14813 +2 \n+ Misses 4151 4150 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4188?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `75.38% <100.00%> (+0.06%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.92% <0.00%> (-0.13%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.93% <0.00%> (+0.32%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4188?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4188?src=pr&el=footer). Last update [877fc56...7c64f23](https://codecov.io/gh/huggingface/transformers/pull/4188?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,588
1,594
1,594
CONTRIBUTOR
null
This PR simplifies the code of the `AlbertAttention` class.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4188/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4188/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4188", "html_url": "https://github.com/huggingface/transformers/pull/4188", "diff_url": "https://github.com/huggingface/transformers/pull/4188.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4188.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4187
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4187/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4187/comments
https://api.github.com/repos/huggingface/transformers/issues/4187/events
https://github.com/huggingface/transformers/issues/4187
613,705,915
MDU6SXNzdWU2MTM3MDU5MTU=
4,187
How to apply Torchtext convenience classes to prepare data for a Transformer?
{ "login": "celsofranssa", "id": 11181748, "node_id": "MDQ6VXNlcjExMTgxNzQ4", "avatar_url": "https://avatars.githubusercontent.com/u/11181748?v=4", "gravatar_id": "", "url": "https://api.github.com/users/celsofranssa", "html_url": "https://github.com/celsofranssa", "followers_url": "https://api.github.com/users/celsofranssa/followers", "following_url": "https://api.github.com/users/celsofranssa/following{/other_user}", "gists_url": "https://api.github.com/users/celsofranssa/gists{/gist_id}", "starred_url": "https://api.github.com/users/celsofranssa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/celsofranssa/subscriptions", "organizations_url": "https://api.github.com/users/celsofranssa/orgs", "repos_url": "https://api.github.com/users/celsofranssa/repos", "events_url": "https://api.github.com/users/celsofranssa/events{/privacy}", "received_events_url": "https://api.github.com/users/celsofranssa/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,588
1,594
1,594
NONE
null
Hello, Reading the tutorial [Language Translation with TorchText](https://pytorch.org/tutorials/beginner/torchtext_translation_tutorial.html), I wondered how someone could use those convenience classes (`Field`, `BucketIterator`) to train/fine-tune a `Transformer`. For instance, I'm currently working with a large dataset distributed across JSONL files, which looks like: ```python { "query": "this is a query 1", "doc": "relevant document regarding query 1" }, { "query": "this is a query 2", "doc": "relevant document regarding query 2" }, ... ``` Now, to feed this data into a transformer like BERT, it is necessary to convert this dataset into the format: ```python3 ( #queries { 'input_ids': tensor([ [ 101, 2023, 2003, 1037, 23032, 1015, 102, 0], [ 101, 2023, 2003, 1037, 23032, 1016, 102, 0]]), 'attention_mask': tensor([ [1, 1, 1, 1, 1, 1, 1, 0], [1, 1, 1, 1, 1, 1, 1, 0]]) }, #docs { 'input_ids': tensor([ [ 101, 2023, 2003, 2028, 7882, 6254, 4953, 102], [ 101, 2023, 2003, 2028, 7882, 6254, 4953, 102]]), 'attention_mask': tensor([ [1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1]]) } ) ``` So, what would be a clear and efficient approach to apply those convenience classes to tokenize a text dataset into the format a transformer requires?
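Not an official recipe, but one straightforward sketch that skips the torchtext `Field`/`BucketIterator` machinery entirely and uses a plain `torch.utils.data.Dataset` with a collate function built on the tokenizer's `batch_encode_plus`; the file path is hypothetical:

```python
import json

from torch.utils.data import DataLoader, Dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

class QueryDocDataset(Dataset):
    def __init__(self, path):
        # one JSON object per line: {"query": ..., "doc": ...}
        with open(path) as f:
            self.pairs = [json.loads(line) for line in f]

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        return self.pairs[idx]["query"], self.pairs[idx]["doc"]

def collate(batch):
    queries, docs = zip(*batch)
    encode = lambda texts: tokenizer.batch_encode_plus(
        list(texts), pad_to_max_length=True, return_tensors="pt"
    )
    return encode(queries), encode(docs)  # matches the (queries, docs) structure above

loader = DataLoader(QueryDocDataset("pairs.jsonl"), batch_size=2, collate_fn=collate)
```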
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4187/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4187/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4186
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4186/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4186/comments
https://api.github.com/repos/huggingface/transformers/issues/4186/events
https://github.com/huggingface/transformers/pull/4186
613,634,282
MDExOlB1bGxSZXF1ZXN0NDE0MzUxNjEy
4,186
Add patience argument to Trainer
{ "login": "thesamuel", "id": 6275391, "node_id": "MDQ6VXNlcjYyNzUzOTE=", "avatar_url": "https://avatars.githubusercontent.com/u/6275391?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thesamuel", "html_url": "https://github.com/thesamuel", "followers_url": "https://api.github.com/users/thesamuel/followers", "following_url": "https://api.github.com/users/thesamuel/following{/other_user}", "gists_url": "https://api.github.com/users/thesamuel/gists{/gist_id}", "starred_url": "https://api.github.com/users/thesamuel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thesamuel/subscriptions", "organizations_url": "https://api.github.com/users/thesamuel/orgs", "repos_url": "https://api.github.com/users/thesamuel/repos", "events_url": "https://api.github.com/users/thesamuel/events{/privacy}", "received_events_url": "https://api.github.com/users/thesamuel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This supercedes https://github.com/huggingface/transformers/pull/2840, where I added patience to the outdated `run_language_modeling.py` script.", "Looking good! Can you add a reference to your original post that this closes https://github.com/huggingface/transformers/issues/4894? Thanks", "Hello, when this feature will be merged? I would like to use it. Thank you.", "> Hello, when this feature will be merged? I would like to use it. Thank you.\r\n\r\nThere are some changes requested that @thesamuel should fix before this can be merged.", "Bump. Early stopping is critical for an automated Trainer that reliably gives us the best model. Current way of figuring out the training stopping point seems to be specifying a static train_epochs but the training duration a model can take depends on way too many factors like learning rate, data complexity, model, model size, optimizer and so on that it is unreasonable to ask the user to specify the epochs in advance.\r\nI believe the current assumption is that people train with very small learning rates so that the loss always seems to keep decreasing very slowly but according to my experience (and on my data) it is a sub-optimal schedule which takes too much time. I see that training with higher learning rates with larger batch sizes and stopping at the early stopping point results in an equally good if not better models. Although this requires use of early stopping.", "I would like to use this early stopping on downstream training.\r\nThe current implementation only stops training by monitoring loss. IMO it should also be possible to monitor other metrics like F1 and ROC-AUC. \r\n\r\nI also would like to add a feature that stores the model each time when the monitored metric improves and then optionaly loads the model after training. Then later evaluation can be done on this \"best\" model.\r\n\r\n@thesamuel @julien-c @kevin-yauris what do you think?", "I plan to work on this once I'm finished with the Funnel Transformer model @PhilipMay (so end of this week, beginning of the next).", "> \r\n> \r\n> I plan to work on this once I'm finished with the Funnel Transformer model @PhilipMay (so end of this week, beginning of the next).\r\n\r\n@sgugger That would be awsome. Maybe you want to get some inspiration from the FARM training loop which is pretty nice IMO:\r\n\r\nhttps://github.com/deepset-ai/FARM/blob/master/farm/train.py#L262-L370", "I just found this PR that was already merged: #7431\r\nI think it solved this...", "Not quite, but it makes implementing it easier.", "> Not quite, but it makes implementing it easier.\r\n\r\nYes - you are right. The patience part is still missing. ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@sgugger Should we keep this open? You wrote in this thread you will work on this if you find the time, but I am not sure if you plan to use another PR for that.", "There has been a PR merged adding the `EarlyStoppingCallback` (#8581) so I think this can be closed now.", "Thanks @cbrochtrup @sgugger! Sorry I didn't get around to this...", "You're welcome, happy to help!" ]
1,588
1,606
1,606
NONE
null
This closes #4894. # Summary Often, we want to stop training if loss does not improve for a number of epochs. This PR adds a "patience" argument, which is a limit on the number of times we can get a non-improving eval loss before stopping training early. It is implemented by other NLP frameworks, such as AllenNLP (see [trainer.py](https://github.com/allenai/allennlp/blob/master/allennlp/training/trainer.py#L95) and [metric_tracker.py](https://github.com/allenai/allennlp/blob/1a8a12cd1b065d74fec3d2e80105a684736ff709/allennlp/training/metric_tracker.py#L6)). # Motivation This feature allows faster fine-tuning by breaking the training loop early and spares users the toil of checking metrics on TensorBoard. # Caveats Often, models are evaluated once per epoch, but run_lm_finetuning.py has an option to evaluate after a set number of model update steps (dictated by `--logging_steps` if `--evaluate_during_training` is true). Because of this, I've elected to tie patience to the number of evaluations without improvement in loss.
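Illustrative only — a sketch of the patience bookkeeping described above, not the PR's actual diff (names are hypothetical):

```python
def should_stop(eval_losses, patience):
    """Return True once `patience` consecutive evaluations fail to improve the best loss."""
    best, stale = float("inf"), 0
    for loss in eval_losses:
        if loss < best:
            best, stale = loss, 0
        else:
            stale += 1
            if stale >= patience:
                return True
    return False

assert should_stop([3.0, 2.5, 2.6, 2.7, 2.8], patience=3)  # three stale evals -> stop
assert not should_stop([3.0, 2.5, 2.4, 2.3], patience=3)   # still improving -> keep going
```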
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4186/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4186/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4186", "html_url": "https://github.com/huggingface/transformers/pull/4186", "diff_url": "https://github.com/huggingface/transformers/pull/4186.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4186.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4185
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4185/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4185/comments
https://api.github.com/repos/huggingface/transformers/issues/4185/events
https://github.com/huggingface/transformers/issues/4185
613,618,890
MDU6SXNzdWU2MTM2MTg4OTA=
4,185
New model request: MobileBERT from Google
{ "login": "forresti", "id": 2020010, "node_id": "MDQ6VXNlcjIwMjAwMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2020010?v=4", "gravatar_id": "", "url": "https://api.github.com/users/forresti", "html_url": "https://github.com/forresti", "followers_url": "https://api.github.com/users/forresti/followers", "following_url": "https://api.github.com/users/forresti/following{/other_user}", "gists_url": "https://api.github.com/users/forresti/gists{/gist_id}", "starred_url": "https://api.github.com/users/forresti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/forresti/subscriptions", "organizations_url": "https://api.github.com/users/forresti/orgs", "repos_url": "https://api.github.com/users/forresti/repos", "events_url": "https://api.github.com/users/forresti/events{/privacy}", "received_events_url": "https://api.github.com/users/forresti/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "https://github.com/lonePatient/MobileBert_PyTorch", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This was completed in #4901" ]
1,588
1,595
1,595
CONTRIBUTOR
null
# 🌟 New model addition This issue is to request adding the MobileBERT model, which was recently released by Carnegie Mellon University, Google Research, and Google Brain. ## Model description MobileBERT is a more computationally efficient model for achieving BERT-base-level accuracy on a smartphone. ## Open source status * [x] the model implementation is available: https://github.com/google-research/google-research/tree/master/mobilebert * [x] the model weights are available: [uncased_L-24_H-128_B-512_A-4_F-4_OPT](https://storage.googleapis.com/cloud-tpu-checkpoints/mobilebert/uncased_L-24_H-128_B-512_A-4_F-4_OPT.tar.gz) (MobileBERT Optimized Uncased English) * [x] who are the authors: Zhiqing Sun, Hongkun Yu (@saberkun), Xiaodan Song, Renjie Liu, Yiming Yang, Denny Zhou
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4185/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4185/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4184
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4184/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4184/comments
https://api.github.com/repos/huggingface/transformers/issues/4184/events
https://github.com/huggingface/transformers/pull/4184
613,575,969
MDExOlB1bGxSZXF1ZXN0NDE0MzAzNzcy
4,184
Model card for allegro/herbert-klej-cased-tokenizer-v1
{ "login": "rmroczkowski", "id": 64909124, "node_id": "MDQ6VXNlcjY0OTA5MTI0", "avatar_url": "https://avatars.githubusercontent.com/u/64909124?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rmroczkowski", "html_url": "https://github.com/rmroczkowski", "followers_url": "https://api.github.com/users/rmroczkowski/followers", "following_url": "https://api.github.com/users/rmroczkowski/following{/other_user}", "gists_url": "https://api.github.com/users/rmroczkowski/gists{/gist_id}", "starred_url": "https://api.github.com/users/rmroczkowski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rmroczkowski/subscriptions", "organizations_url": "https://api.github.com/users/rmroczkowski/orgs", "repos_url": "https://api.github.com/users/rmroczkowski/repos", "events_url": "https://api.github.com/users/rmroczkowski/events{/privacy}", "received_events_url": "https://api.github.com/users/rmroczkowski/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,588
1,588
1,588
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4184/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4184", "html_url": "https://github.com/huggingface/transformers/pull/4184", "diff_url": "https://github.com/huggingface/transformers/pull/4184.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4184.patch", "merged_at": 1588945364000 }
https://api.github.com/repos/huggingface/transformers/issues/4183
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4183/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4183/comments
https://api.github.com/repos/huggingface/transformers/issues/4183/events
https://github.com/huggingface/transformers/pull/4183
613,574,017
MDExOlB1bGxSZXF1ZXN0NDE0MzAyMTkw
4,183
Model card for allegro/herbert-klej-cased-v1
{ "login": "rmroczkowski", "id": 64909124, "node_id": "MDQ6VXNlcjY0OTA5MTI0", "avatar_url": "https://avatars.githubusercontent.com/u/64909124?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rmroczkowski", "html_url": "https://github.com/rmroczkowski", "followers_url": "https://api.github.com/users/rmroczkowski/followers", "following_url": "https://api.github.com/users/rmroczkowski/following{/other_user}", "gists_url": "https://api.github.com/users/rmroczkowski/gists{/gist_id}", "starred_url": "https://api.github.com/users/rmroczkowski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rmroczkowski/subscriptions", "organizations_url": "https://api.github.com/users/rmroczkowski/orgs", "repos_url": "https://api.github.com/users/rmroczkowski/repos", "events_url": "https://api.github.com/users/rmroczkowski/events{/privacy}", "received_events_url": "https://api.github.com/users/rmroczkowski/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Great work and great model cards, congrats on beating the state-of-the-art of Polish NLU!", "Thanks!" ]
1,588
1,589
1,588
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4183/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4183", "html_url": "https://github.com/huggingface/transformers/pull/4183", "diff_url": "https://github.com/huggingface/transformers/pull/4183.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4183.patch", "merged_at": 1588945349000 }
https://api.github.com/repos/huggingface/transformers/issues/4182
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4182/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4182/comments
https://api.github.com/repos/huggingface/transformers/issues/4182/events
https://github.com/huggingface/transformers/issues/4182
613,560,540
MDU6SXNzdWU2MTM1NjA1NDA=
4,182
Is it possible to get document embeddings using GPT-2? If so, how?
{ "login": "youssefavx", "id": 56129524, "node_id": "MDQ6VXNlcjU2MTI5NTI0", "avatar_url": "https://avatars.githubusercontent.com/u/56129524?v=4", "gravatar_id": "", "url": "https://api.github.com/users/youssefavx", "html_url": "https://github.com/youssefavx", "followers_url": "https://api.github.com/users/youssefavx/followers", "following_url": "https://api.github.com/users/youssefavx/following{/other_user}", "gists_url": "https://api.github.com/users/youssefavx/gists{/gist_id}", "starred_url": "https://api.github.com/users/youssefavx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/youssefavx/subscriptions", "organizations_url": "https://api.github.com/users/youssefavx/orgs", "repos_url": "https://api.github.com/users/youssefavx/repos", "events_url": "https://api.github.com/users/youssefavx/events{/privacy}", "received_events_url": "https://api.github.com/users/youssefavx/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Not from Huggingface, but for this, I would use something like SentenceBERT. As far as I know.", "@moinnadeem Thank you for pointing me in that direction! But would sentencebert really work for documents?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,588
1,594
1,594
NONE
null
# ❓ Questions & Help I posted a question here on SO: https://stackoverflow.com/questions/61641257/how-to-get-document-embeddings-using-gpt-2?noredirect=1#comment109035633_61641257 But unfortunately I was informed that "Please notice that SO is not a tutorial service, and also that recommendation requests for external resources are explicitly off-topic." ## Details I'm curious if using GPT-2 might yield a higher accuracy for document vectors (with greatly varying length) or not (would it surpass the state of the art?) Really I'm most interested in document embeddings that are as accurate as possible. I'm wondering if using GPT-2 will get results that are more accurate than Paragraph Vectors, for example. I heard that in order to get vectors from GPT-2 "you can use a weighted sum and/or concatenation of vector outputs at its hidden layers (typically the last few hidden layers) as a representation of its corresponding words or even "meaning" of the entire text, although for this role BERT is used more often as it is bi-directional and takes into account of both forward and backward contexts." As a machine learning and NLP beginner, I'd love to know how to go about this, or to be pointed in the right direction to learn more about how to attempt this in Python. I've tried fine-tuning GPT-2 before but I have no idea how to extract vectors from it for text. **A link to the original question on Stack Overflow**: https://stackoverflow.com/questions/61641257/how-to-get-document-embeddings-using-gpt-2?noredirect=1#comment109035633_61641257
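For reference, a minimal sketch of the hidden-state pooling idea the question quotes — assuming a recent `transformers` release where model outputs expose `.hidden_states`; mean-pooling the last four layers is one illustrative choice, not a validated recipe:

```python
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

input_ids = tokenizer.encode("An example document to embed.", return_tensors="pt")
with torch.no_grad():
    outputs = model(input_ids)

# hidden_states is a tuple: the embedding layer plus one tensor per
# transformer layer, each of shape (batch, seq_len, hidden_size).
hidden_states = outputs.hidden_states
# Average the last four layers, then mean-pool over tokens.
doc_vector = torch.stack(hidden_states[-4:]).mean(dim=0).mean(dim=1)
print(doc_vector.shape)  # torch.Size([1, 768]) for base GPT-2
```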
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4182/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4182/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4181
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4181/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4181/comments
https://api.github.com/repos/huggingface/transformers/issues/4181/events
https://github.com/huggingface/transformers/issues/4181
613,503,438
MDU6SXNzdWU2MTM1MDM0Mzg=
4,181
[Marian] Key-Error for some languages
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 2039044877, "node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3", "url": "https://api.github.com/repos/huggingface/transformers/labels/marian", "name": "marian", "color": "30cc95", "default": false, "description": "" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "marian C++ code uses `<unk>` in this case" ]
1,588
1,589
1,589
CONTRIBUTOR
null
model -> symbol that caused `KeyError`. ```python {'ha-en': '|', 'ber-es': '▁Be', 'pis-fi': '▁|', 'es-mt': '|', 'fr-he': '₫', 'niu-sv': 'OGI', 'fi-fse': '▁rentou', 'fi-mh': '|', 'hr-es': '|', 'fr-ber': '▁devr', 'ase-en': 'olos'} ``` Reproduce ```python pair = 'ber-es' mname = f'Helsinki-NLP/opus-mt-{pair}' tok = MarianTokenizer.from_pretrained(mname) tok.prepare_translation_batch(['Bessif kan ay aɣ-d-iṣaḥ wakud akken ad necc imensi.']) ``` Traceback: ```python --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-55-5da2d9c189be> in <module> 2 mname = f'Helsinki-NLP/opus-mt-{pair}' 3 tok = MarianTokenizer.from_pretrained(mname) ----> 4 tok.prepare_translation_batch(['Bessif kan ay aɣ-d-iṣaḥ wakud akken ad necc imensi.']) ~/transformers_fork/src/transformers/tokenization_marian.py in prepare_translation_batch(self, src_texts, tgt_texts, max_length, pad_to_max_length, return_tensors) 145 max_length=max_length, 146 pad_to_max_length=pad_to_max_length, --> 147 src=True, 148 ) 149 if tgt_texts is None: ~/transformers_fork/src/transformers/tokenization_utils.py in batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, max_length, stride, truncation_strategy, pad_to_max_length, is_pretokenized, return_tensors, return_token_type_ids, return_attention_masks, return_overflowing_tokens, return_special_tokens_masks, return_offsets_mapping, return_lengths, **kwargs) 1729 ids, pair_ids = ids_or_pair_ids, None 1730 -> 1731 first_ids = get_input_ids(ids) 1732 second_ids = get_input_ids(pair_ids) if pair_ids is not None else None 1733 input_ids.append((first_ids, second_ids)) ~/transformers_fork/src/transformers/tokenization_utils.py in get_input_ids(text) 1697 if isinstance(text, str): 1698 tokens = self.tokenize(text, add_special_tokens=add_special_tokens, **kwargs) -> 1699 return self.convert_tokens_to_ids(tokens) 1700 elif isinstance(text, (list, tuple)) and len(text) > 0 and isinstance(text[0], str): 1701 return self.convert_tokens_to_ids(text) ~/transformers_fork/src/transformers/tokenization_utils.py in convert_tokens_to_ids(self, tokens) 1340 ids = [] 1341 for token in tokens: -> 1342 ids.append(self._convert_token_to_id_with_added_voc(token)) 1343 return ids 1344 ~/transformers_fork/src/transformers/tokenization_utils.py in _convert_token_to_id_with_added_voc(self, token) 1349 if token in self.added_tokens_encoder: 1350 return self.added_tokens_encoder[token] -> 1351 return self._convert_token_to_id(token) 1352 1353 def _convert_token_to_id(self, token): ~/transformers_fork/src/transformers/tokenization_marian.py in _convert_token_to_id(self, token) 90 91 def _convert_token_to_id(self, token): ---> 92 return self.encoder[token] 93 94 def _tokenize(self, text: str, src=True) -> List[str]: KeyError: '▁Be' ```
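Given the follow-up note that the Marian C++ code maps such pieces to `<unk>`, a tokenizer-side fix could look roughly like this — a sketch, not necessarily the exact patch that was merged:

```python
# Hypothetical patch to MarianTokenizer in tokenization_marian.py:
# fall back to the <unk> id instead of raising KeyError, mirroring
# the behavior of the Marian C++ implementation.
def _convert_token_to_id(self, token):
    # Pieces missing from the vocab (e.g. '▁Be' for opus-mt-ber-es)
    # are mapped to the unknown-token id.
    return self.encoder.get(token, self.encoder[self.unk_token])
```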
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4181/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4181/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4180
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4180/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4180/comments
https://api.github.com/repos/huggingface/transformers/issues/4180/events
https://github.com/huggingface/transformers/pull/4180
613,477,467
MDExOlB1bGxSZXF1ZXN0NDE0MjIzNzY3
4,180
considering empty input text to avoid string index out of range error
{ "login": "hmohebbi", "id": 16359318, "node_id": "MDQ6VXNlcjE2MzU5MzE4", "avatar_url": "https://avatars.githubusercontent.com/u/16359318?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hmohebbi", "html_url": "https://github.com/hmohebbi", "followers_url": "https://api.github.com/users/hmohebbi/followers", "following_url": "https://api.github.com/users/hmohebbi/following{/other_user}", "gists_url": "https://api.github.com/users/hmohebbi/gists{/gist_id}", "starred_url": "https://api.github.com/users/hmohebbi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hmohebbi/subscriptions", "organizations_url": "https://api.github.com/users/hmohebbi/orgs", "repos_url": "https://api.github.com/users/hmohebbi/repos", "events_url": "https://api.github.com/users/hmohebbi/events{/privacy}", "received_events_url": "https://api.github.com/users/hmohebbi/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,588
1,594
1,594
NONE
null
When the input string is empty, the condition in tokenization_roberta raises a string index out of range error. For example, the QQP dataset contains a pair input whose second string is empty: {'idx': 362246, 'label': 0, 'question1': b'How can I develop android app?', 'question2': b''}
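A sketch of what the guarded condition could look like in `tokenization_roberta.py`; the surrounding signature is paraphrased from that era of the codebase and may not match the merged diff exactly:

```python
def prepare_for_tokenization(self, text, add_special_tokens=False, **kwargs):
    if "add_prefix_space" in kwargs:
        add_prefix_space = kwargs["add_prefix_space"]
    else:
        add_prefix_space = add_special_tokens
    # Check the length first so that empty inputs (like QQP's empty
    # question2) no longer raise "string index out of range" on text[0].
    if add_prefix_space and len(text) > 0 and not text[0].isspace():
        text = " " + text
    return text
```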
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4180/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4180/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4180", "html_url": "https://github.com/huggingface/transformers/pull/4180", "diff_url": "https://github.com/huggingface/transformers/pull/4180.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4180.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4179
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4179/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4179/comments
https://api.github.com/repos/huggingface/transformers/issues/4179/events
https://github.com/huggingface/transformers/pull/4179
613,417,949
MDExOlB1bGxSZXF1ZXN0NDE0MTc1ODI1
4,179
Create README.md
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4179?src=pr&el=h1) Report\n> Merging [#4179](https://codecov.io/gh/huggingface/transformers/pull/4179?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ff8ed52dd8c6268f2535c0721cde9e360fbb0ce0&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4179/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4179?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4179 +/- ##\n=======================================\n Coverage 78.79% 78.79% \n=======================================\n Files 114 114 \n Lines 18712 18712 \n=======================================\n Hits 14745 14745 \n Misses 3967 3967 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4179?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4179?src=pr&el=footer). Last update [ff8ed52...c11fb93](https://codecov.io/gh/huggingface/transformers/pull/4179?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Really cool. Do they have a downstream task that we can fine-tune on, and get some eval results?", "I will check it again and will let you know.\r\nI am also working on a Molecule Drug Transformer for drug target interaction system. It is more powerful and they provide several downstream tasks and evaluation datasets. Something like this: https://arxiv.org/abs/2004.11424" ]
1,588
1,588
1,588
CONTRIBUTOR
null
model card for my De Novo Drug discovery model using MLM
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4179/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4179/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4179", "html_url": "https://github.com/huggingface/transformers/pull/4179", "diff_url": "https://github.com/huggingface/transformers/pull/4179.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4179.patch", "merged_at": 1588944336000 }
https://api.github.com/repos/huggingface/transformers/issues/4178
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4178/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4178/comments
https://api.github.com/repos/huggingface/transformers/issues/4178/events
https://github.com/huggingface/transformers/pull/4178
613,413,107
MDExOlB1bGxSZXF1ZXN0NDE0MTcxODc0
4,178
[Model Cards] Add 1010 model cards for Helsinki-NLP
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4178?src=pr&el=h1) Report\n> Merging [#4178](https://codecov.io/gh/huggingface/transformers/pull/4178?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ff8ed52dd8c6268f2535c0721cde9e360fbb0ce0&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4178/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4178?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4178 +/- ##\n==========================================\n- Coverage 78.79% 78.79% -0.01% \n==========================================\n Files 114 114 \n Lines 18712 18712 \n==========================================\n- Hits 14745 14744 -1 \n- Misses 3967 3968 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4178?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4178/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.61% <0.00%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4178?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4178?src=pr&el=footer). Last update [ff8ed52...dc9a6bc](https://codecov.io/gh/huggingface/transformers/pull/4178?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I am working on a NMT en to es following this format https://github.com/facebookresearch/XLM/#iii-applications-supervised--unsupervised-mt\r\n\r\nIs there any way to integrate my final model on the HUB? (script for conversion or smth like that)", "This creates a lot of files but I think it's fine. Thoughts?", "I went through all the 995 files and they look good", "> I went through all the 995 files and they look good\r\n\r\nhaha", "> This creates a lot of files but I think it's fine. Thoughts?\r\n\r\nI think it's fine as well!", "fwiw, it's 5MB of data. Previously, model_cards was 1MB of data. I think it's fine to merge @julien-c ", "Gunna merge this 7pm EST barring objections @julien-c ", "Can you add them to the S3 bucket instead of the repo? I'll add code tomorrow to display them on the website from the repo (not currently the case)", "Sure, will add once the group naming question is resolved." ]
1,588
1,589
1,589
CONTRIBUTOR
null
- Generated using tools in `convert_marian_to_pytorch.py`. - Takes the bottom most entry in each opus-mt-train/models/*/README.md. This assumes that the bottom entry is most recent. (Created by @jorgtied). - Example below is at `model_cards/Helsinki-NLP/opus-mt-fr-en/README.md` __________________________ # opus-2020-02-26.zip * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-en/opus-2020-02-26.zip) * test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-en/opus-2020-02-26.test.txt) * test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-en/opus-2020-02-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdiscussdev2015-enfr.fr.en | 33.1 | 0.580 | | newsdiscusstest2015-enfr.fr.en | 38.7 | 0.614 | | newssyscomb2009.fr.en | 30.3 | 0.569 | | news-test2008.fr.en | 26.2 | 0.542 | | newstest2009.fr.en | 30.2 | 0.570 | | newstest2010.fr.en | 32.2 | 0.590 | | newstest2011.fr.en | 33.0 | 0.597 | | newstest2012.fr.en | 32.8 | 0.591 | | newstest2013.fr.en | 33.9 | 0.591 | | newstest2014-fren.fr.en | 37.8 | 0.633 | | Tatoeba.fr.en | 57.5 | 0.720 |
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4178/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4178/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4178", "html_url": "https://github.com/huggingface/transformers/pull/4178", "diff_url": "https://github.com/huggingface/transformers/pull/4178.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4178.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4177
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4177/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4177/comments
https://api.github.com/repos/huggingface/transformers/issues/4177/events
https://github.com/huggingface/transformers/issues/4177
613,337,980
MDU6SXNzdWU2MTMzMzc5ODA=
4,177
Not able to import certain packages
{ "login": "prajjwal1", "id": 24690051, "node_id": "MDQ6VXNlcjI0NjkwMDUx", "avatar_url": "https://avatars.githubusercontent.com/u/24690051?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prajjwal1", "html_url": "https://github.com/prajjwal1", "followers_url": "https://api.github.com/users/prajjwal1/followers", "following_url": "https://api.github.com/users/prajjwal1/following{/other_user}", "gists_url": "https://api.github.com/users/prajjwal1/gists{/gist_id}", "starred_url": "https://api.github.com/users/prajjwal1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prajjwal1/subscriptions", "organizations_url": "https://api.github.com/users/prajjwal1/orgs", "repos_url": "https://api.github.com/users/prajjwal1/repos", "events_url": "https://api.github.com/users/prajjwal1/events{/privacy}", "received_events_url": "https://api.github.com/users/prajjwal1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You need to make sure you install from source as documented in the README" ]
1,588
1,588
1,588
CONTRIBUTOR
null
# 🐛 Bug ## Information - I am using `run_glue.py`. - Working with GLUE ## To reproduce Steps to reproduce the behavior: Even though these functions are imported in `__init__.py` (I have thoroughly checked myself), I'm not able to import `EvalPrediction`, `HfArgumentParser`, `Trainer`, `TrainingArguments` ## Expected behavior Should be able to import them. I'm sharing this [colab notebook](https://colab.research.google.com/drive/1CqWzt9A96iUT2vh47qwnMSDMsQHQkRSx) link. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.8 - Platform: Ubuntu 18.04 - Python version: 1.5 - PyTorch version (GPU?): 1.5 - Tensorflow version (GPU?): NA - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4177/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4177/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4176
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4176/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4176/comments
https://api.github.com/repos/huggingface/transformers/issues/4176/events
https://github.com/huggingface/transformers/pull/4176
613,251,933
MDExOlB1bGxSZXF1ZXN0NDE0MDQxMDMz
4,176
ONNX conversion script.
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,588
1,651
1,589
MEMBER
null
This PR adds a conversion script to export our models to ONNX IR. The plan is to support both PyTorch and TensorFlow: - [x] PyTorch - [ ] TensorFlow TensorFlow is currently blocked by an issue in the conversion script provided by ONNX (seems fixed on master: https://github.com/onnx/tensorflow-onnx/issues/876)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4176/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4176/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4176", "html_url": "https://github.com/huggingface/transformers/pull/4176", "diff_url": "https://github.com/huggingface/transformers/pull/4176.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4176.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4175
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4175/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4175/comments
https://api.github.com/repos/huggingface/transformers/issues/4175/events
https://github.com/huggingface/transformers/issues/4175
613,181,942
MDU6SXNzdWU2MTMxODE5NDI=
4,175
args.output_dir seems to be ignored
{ "login": "yilanchen6", "id": 38677181, "node_id": "MDQ6VXNlcjM4Njc3MTgx", "avatar_url": "https://avatars.githubusercontent.com/u/38677181?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yilanchen6", "html_url": "https://github.com/yilanchen6", "followers_url": "https://api.github.com/users/yilanchen6/followers", "following_url": "https://api.github.com/users/yilanchen6/following{/other_user}", "gists_url": "https://api.github.com/users/yilanchen6/gists{/gist_id}", "starred_url": "https://api.github.com/users/yilanchen6/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yilanchen6/subscriptions", "organizations_url": "https://api.github.com/users/yilanchen6/orgs", "repos_url": "https://api.github.com/users/yilanchen6/repos", "events_url": "https://api.github.com/users/yilanchen6/events{/privacy}", "received_events_url": "https://api.github.com/users/yilanchen6/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,588
1,594
1,594
NONE
null
https://github.com/huggingface/transformers/blob/a638e986f45b338c86482e1c13e045c06cfeccad/examples/run_squad.py#L814 It seems that args.output_dir is being ignored for checkpoints when args.eval_all_checkpoints is True. It probably should be like this: global_step = checkpoint.split("-")[-1] if (len(checkpoints) > 1 and checkpoint != args.output_dir) else ""
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4175/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4175/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4174
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4174/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4174/comments
https://api.github.com/repos/huggingface/transformers/issues/4174/events
https://github.com/huggingface/transformers/issues/4174
613,109,156
MDU6SXNzdWU2MTMxMDkxNTY=
4,174
Make ElectraPreTrainedModel importable
{ "login": "kumapo", "id": 70637, "node_id": "MDQ6VXNlcjcwNjM3", "avatar_url": "https://avatars.githubusercontent.com/u/70637?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kumapo", "html_url": "https://github.com/kumapo", "followers_url": "https://api.github.com/users/kumapo/followers", "following_url": "https://api.github.com/users/kumapo/following{/other_user}", "gists_url": "https://api.github.com/users/kumapo/gists{/gist_id}", "starred_url": "https://api.github.com/users/kumapo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kumapo/subscriptions", "organizations_url": "https://api.github.com/users/kumapo/orgs", "repos_url": "https://api.github.com/users/kumapo/repos", "events_url": "https://api.github.com/users/kumapo/events{/privacy}", "received_events_url": "https://api.github.com/users/kumapo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for contributing #4173 :)." ]
1,588
1,589
1,589
CONTRIBUTOR
null
# 🚀 Feature request Make `ElectraPreTrainedModel` importable ## Motivation Consistency. `DistilBertPreTrainedModel` and `AlbertPreTrainedModel` are importable but I cannot import `ElectraPretrainedModel` from v2.8.0 (see https://github.com/huggingface/transformers/issues/1968) ## Your contribution https://github.com/huggingface/transformers/pull/4173
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4174/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4174/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4173
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4173/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4173/comments
https://api.github.com/repos/huggingface/transformers/issues/4173/events
https://github.com/huggingface/transformers/pull/4173
613,105,591
MDExOlB1bGxSZXF1ZXN0NDEzOTIyMDk1
4,173
Include ElectraPreTrainedModel into __init__
{ "login": "kumapo", "id": 70637, "node_id": "MDQ6VXNlcjcwNjM3", "avatar_url": "https://avatars.githubusercontent.com/u/70637?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kumapo", "html_url": "https://github.com/kumapo", "followers_url": "https://api.github.com/users/kumapo/followers", "following_url": "https://api.github.com/users/kumapo/following{/other_user}", "gists_url": "https://api.github.com/users/kumapo/gists{/gist_id}", "starred_url": "https://api.github.com/users/kumapo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kumapo/subscriptions", "organizations_url": "https://api.github.com/users/kumapo/orgs", "repos_url": "https://api.github.com/users/kumapo/repos", "events_url": "https://api.github.com/users/kumapo/events{/privacy}", "received_events_url": "https://api.github.com/users/kumapo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,588
1,588
1,588
CONTRIBUTOR
null
https://github.com/huggingface/transformers/issues/4174
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4173/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4173/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4173", "html_url": "https://github.com/huggingface/transformers/pull/4173", "diff_url": "https://github.com/huggingface/transformers/pull/4173.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4173.patch", "merged_at": 1588780823000 }
https://api.github.com/repos/huggingface/transformers/issues/4172
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4172/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4172/comments
https://api.github.com/repos/huggingface/transformers/issues/4172/events
https://github.com/huggingface/transformers/issues/4172
613,097,429
MDU6SXNzdWU2MTMwOTc0Mjk=
4,172
ImportError: cannot import name 'AutoModel' from 'transformers'
{ "login": "akeyhero", "id": 23152825, "node_id": "MDQ6VXNlcjIzMTUyODI1", "avatar_url": "https://avatars.githubusercontent.com/u/23152825?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akeyhero", "html_url": "https://github.com/akeyhero", "followers_url": "https://api.github.com/users/akeyhero/followers", "following_url": "https://api.github.com/users/akeyhero/following{/other_user}", "gists_url": "https://api.github.com/users/akeyhero/gists{/gist_id}", "starred_url": "https://api.github.com/users/akeyhero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akeyhero/subscriptions", "organizations_url": "https://api.github.com/users/akeyhero/orgs", "repos_url": "https://api.github.com/users/akeyhero/repos", "events_url": "https://api.github.com/users/akeyhero/events{/privacy}", "received_events_url": "https://api.github.com/users/akeyhero/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Got the same result with 2.9.0, i.e. with `# pip install tensorflow==2.0 transformers==2.9.0`", "`AutoModel` is the equivalent of `TFAutoModel` but for PyTorch model classes. If you don't have pytorch installed this is expected.\r\n\r\nUse `TFAutoModel` instead =)\r\n\r\n", "Thank you!", "@julien-c \r\nBut there still is a problem with `transformers-cli`.\r\nNow, people who want to use `transformers-cli download` are required to install PyTorch even when they use Tensorflow only.\r\n\r\nedit:\r\nIt could be the problem specific to `t5-large`. I'll try it later.", "I have pytorch installed but still show the error:\r\n`ImportError: cannot import name 'AutoModelForSequenceClassification'`\r\n\r\nEnvironment info:\r\n* Ubuntu 16.04.6\r\n* Python 3.6.8\r\n* Torch 1.2.0\r\n* CUDA 10.0\r\n* transformers 4.8.2", "@JTWang2000, even without PyTorch you should be able to import `AutoModelForSequenceClassification`:\r\n\r\n```py\r\nPython 3.8.6 (default, Dec 4 2020, 09:21:54) \r\n[GCC 10.2.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import torch\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nModuleNotFoundError: No module named 'torch'\r\n>>> from transformers import AutoModelForSequenceClassification\r\n>>>\r\n```\r\n\r\nCan you offer a reproducible example? Thanks!", "@LysandreJik Thank you for your reply\r\n### Steps to reproduce the behavior:\r\n\r\n* Python: 3.6.8\r\n* Ubuntu 16.04.6\r\n* CUDA 10.0\r\n` pip install torch==1.2.0 torchvision==0.4.0`\r\n` pip install transformers`\r\n```\r\nPython 3.6.8 (default, Jul 6 2021, 15:59:52)\r\n[GCC 5.4.0 20160609] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from transformers import AutoModel\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nImportError: cannot import name 'AutoModel'\r\n>>> from transformers import AutoModelForSequenceClassification\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nImportError: cannot import name 'AutoModelForSequenceClassification'\r\n```", "The may be som error as I am getting:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/usename/dev/speech_recognition.py\", line 1, in <module>\r\n from transformers import Wav2Vec2Tokenizer, Wav2Vec2ForCTC\r\nImportError: cannot import name 'Wav2Vec2ForCTC' from 'transformers' (unknown location)\r\n```\r\n\r\nThe line of code: \r\n\r\n```\r\nfrom transformers import Wav2Vec2Tokenizer, Wav2Vec2ForCTC\r\n```\r\n\r\nTransformers: 4.3.0\r\nPython 3.9.6\r\nOS: Fedora 34", "@JTWang2000 Recent versions of `transformers` are compatible with more recent PyTorch versions, see the [README](https://github.com/huggingface/transformers#with-pip).\r\n\r\n@kennylajara could you try upgrading `transformers` to a more recent version to see if it fixes your issue? (Wav2Vec2 has been improved since v4.3.0) - if not, do you mind linking me to a reproducible code example (with pip installations)/colab notebook or something similar that reproduces your issue?", "**EDIT:** Well... **This** issue has been solved by updating dependencies but now it produces another issue. I will post the link to the reproducible issue in a minute.", "@LysandreJik There you go: [https://github.com/kennylajara/speech-recognition](https://github.com/kennylajara/speech-recognition)", "@LysandreJik Thank you for the reply.\r\nI reinstall PyTorch with version 1.7.0 and now it works. Thanks! 
", "@kennylajara answered in a separate issue (please keep issues on different subjects separate, thank you): https://github.com/kennylajara/speech-recognition/issues/1", "In my case I both installed pytorch and tensorflow (noob). Creating new environment just for pytorch help me solve it." ]
1,588
1,661
1,588
NONE
null
# 🐛 Bug (Not sure that it is a bug, but it is too easy to reproduce I think) ## Information I couldn't run `python -c 'from transformers import AutoModel'`, instead getting the error on the titile. ## To reproduce Steps to reproduce the behavior: 1. `$ sudo docker run -it --rm python:3.6 bash` 2. `# pip install tensorflow==2.0 transformers==2.8.0` 3. `# python -c 'from transformers import AutoModel'` ``` Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: cannot import name 'AutoModel' ``` Initially I got this error with `transformers-cli download` : ``` # transformers-cli download t5-large Traceback (most recent call last): File "/usr/local/bin/transformers-cli", line 32, in <module> service.run() File "/usr/local/lib/python3.6/site-packages/transformers/commands/download.py", line 29, in run from transformers import AutoModel, AutoTokenizer ImportError: cannot import name 'AutoModel' ``` ## Expected behavior no import error. ## Environment info - `transformers` version: 2.8.0 - Platform: Linux-4.15.0-99-generic-x86_64-with-debian-10.3 - Python version: 3.6.10 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.0.0 (False) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no Also tested on 3.6@Ubuntu 18.04, 3.8@Docker(python:3.8). The Docker backend is Ubuntu 18.04. Thank you.
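As the resolution in the comments notes, `AutoModel` is the PyTorch class; in a TensorFlow-only environment the `TF*` equivalents are the ones that get exported. A minimal sketch:

```python
# AutoModel needs PyTorch; with only TensorFlow installed, use the
# TF* classes instead.
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModel.from_pretrained("bert-base-uncased")

input_ids = tokenizer.encode("Hello world", return_tensors="tf")
outputs = model(input_ids)  # last hidden state comes first in the outputs
```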
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4172/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4172/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4171
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4171/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4171/comments
https://api.github.com/repos/huggingface/transformers/issues/4171/events
https://github.com/huggingface/transformers/issues/4171
613,094,452
MDU6SXNzdWU2MTMwOTQ0NTI=
4,171
Encoder/Decoder generation
{ "login": "anishthite", "id": 14373068, "node_id": "MDQ6VXNlcjE0MzczMDY4", "avatar_url": "https://avatars.githubusercontent.com/u/14373068?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anishthite", "html_url": "https://github.com/anishthite", "followers_url": "https://api.github.com/users/anishthite/followers", "following_url": "https://api.github.com/users/anishthite/following{/other_user}", "gists_url": "https://api.github.com/users/anishthite/gists{/gist_id}", "starred_url": "https://api.github.com/users/anishthite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anishthite/subscriptions", "organizations_url": "https://api.github.com/users/anishthite/orgs", "repos_url": "https://api.github.com/users/anishthite/repos", "events_url": "https://api.github.com/users/anishthite/events{/privacy}", "received_events_url": "https://api.github.com/users/anishthite/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Are you using this exact line\r\n```\r\nmodel = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased') # initialize Bert2Bert\r\n```\r\nIf yes, then please use paths for your saved model. Few other things to try: verify data pipeline,\r\ntry using beam search or sampling in generate", "Thanks! I am using that exact line. I saved my trained model using save_pretrained() and it saved everything as one file. How would I separate this, or should I just retrain and re-save the encoder and decoder separately? Also, does the untrained model not work due to the untrained cross attention layer? ", "If you saved your model using `.save_pretrained` then you can load it using just `.from_pretrained` as you load any other HF model. Just pass the path of your saved model. You won't need to use `.from_encoder_decoder_pretrained`", "Hi @anishthite,\r\n\r\nHow did you train your Bert2Bert model? Can you post the code you used to train your model here? Dontt worry if it's a very long code snippet :-)", "Hello! I managed to figure out the issue. I retrained and saved the encoder and decoder in their own folders. I then was able to load it in as @patil-suraj suggested. I guess earlier it was loading in the untrained model. Would it be helpful to redefine save_pretrained() for EncoderDecoder models to automatically split it into an encoder and decoder folder I can submit a PR if you want. \r\n```\r\n dataset = QADataset(dataset=args.traindataset, block_size=args.maxseqlen)\r\n qa_loader = DataLoader(dataset, batch_size=args.batch, shuffle=True)\r\n model.train()\r\n optimizer = AdamW(model.parameters(), lr=LEARNING_RATE)\r\n t_total = len(qa_loader) // args.gradient_acums * args.epochs\r\n scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=WARMUP_STEPS, num_training_steps = t_total)\r\n proc_seq_count = 0\r\n sum_loss = 0.0\r\n batch_count = 0\r\n models_folder = \"combinerslargeencoder\"\r\n models_folder2 = \"combinerslargedecoder\"\r\n if not os.path.exists(models_folder):\r\n os.mkdir(models_folder)\r\n if not os.path.exists(models_folder2):\r\n os.mkdir(models_folder2)\r\n for epoch in range(args.epochs):\r\n \r\n print(f\"EPOCH {epoch} started\" + '=' * 30)\r\n \r\n for idx,qa in enumerate(qa_loader):\r\n print(str(idx) + ' ' + str(len(qa_loader)))\r\n inputs, labels = (qa[0], qa[1])\r\n inputs = inputs.to(device)\r\n labels = labels.to(device)\r\n outputs = model(input_ids=inputs, decoder_input_ids=labels, lm_labels=labels)\r\n loss, logits = outputs[:2]\r\n loss = loss / args.gradient_acums\r\n loss.backward()\r\n sum_loss = sum_loss + loss.detach().data\r\n \r\n #proc_seq_count = proc_seq_count + 1\r\n #if proc_seq_count == args.gradient_acums:\r\n # proc_seq_count = 0 \r\n batch_count += 1\r\n if (idx + 1) % args.gradient_acums == 0:\r\n optimizer.step()\r\n scheduler.step() \r\n optimizer.zero_grad()\r\n model.zero_grad()\r\n\r\n if batch_count == 100:\r\n print(f\"sum loss {sum_loss}\")\r\n batch_count = 0\r\n sum_loss = 0.0\r\n \r\n # Store the model after each epoch to compare the performance of them\r\n torch.save(model.state_dict(), os.path.join(models_folder, f\"combined_mymodel_{args.maxseqlen}{epoch}{args.gradient_acums}.pt\"))\r\n model.save_pretrained(models_folder)\r\n model.encoder.save_pretrained(models_folder)\r\n model.decoder.save_pretrained(models_folder2)\r\n evaluate(args, model, tokenizer)\r\n```", "Why do you save the encoder and decoder model seperately?:\r\n```\r\n model.encoder.save_pretrained(models_folder)\r\n 
model.decoder.save_pretrained(models_folder2)\r\n```\r\nThis line:\r\n```\r\n model.save_pretrained(models_folder)\r\n```\r\nshould be enough. \r\n\r\nWe moved away from saving the model to two separate folders, see: https://github.com/huggingface/transformers/pull/3383. Also the docs: https://huggingface.co/transformers/model_doc/encoderdecoder.html might be useful." ]
1,588
1,595
1,588
NONE
null
Hello! I tried to train a Bert2Bert model for QA generation, however when I try the generate function it returns gibberish. I also tried using the example code below, and that also generated gibberish(the output is "[PAD] leon leon leon leon leonieieieieie shall shall shall shall shall shall shall shall shall"). Is the generate function supposed to work for EncoderDecoder models, and what am I doing wrong? ``` from transformers import EncoderDecoderModel, BertTokenizer import torch tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased') # initialize Bert2Bert generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id) ```
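Putting the maintainers' replies in the thread together, a sketch of the single-folder save/reload flow; the directory name is a placeholder, and `model`/`input_ids` are assumed to come from the surrounding snippets:

```python
from transformers import EncoderDecoderModel

# One call saves the whole encoder-decoder model to a single folder;
# there is no need to persist encoder and decoder separately.
model.save_pretrained("bert2bert-qa")  # "bert2bert-qa" is a hypothetical path

# Reload with plain from_pretrained (from_encoder_decoder_pretrained is
# only for assembling a fresh model from two pretrained checkpoints).
model = EncoderDecoderModel.from_pretrained("bert2bert-qa")
generated = model.generate(
    input_ids, decoder_start_token_id=model.config.decoder.pad_token_id
)
```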
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4171/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4171/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4170
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4170/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4170/comments
https://api.github.com/repos/huggingface/transformers/issues/4170/events
https://github.com/huggingface/transformers/issues/4170
613,055,589
MDU6SXNzdWU2MTMwNTU1ODk=
4,170
How to position encode a sentence?
{ "login": "saahiluppal", "id": 47444392, "node_id": "MDQ6VXNlcjQ3NDQ0Mzky", "avatar_url": "https://avatars.githubusercontent.com/u/47444392?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saahiluppal", "html_url": "https://github.com/saahiluppal", "followers_url": "https://api.github.com/users/saahiluppal/followers", "following_url": "https://api.github.com/users/saahiluppal/following{/other_user}", "gists_url": "https://api.github.com/users/saahiluppal/gists{/gist_id}", "starred_url": "https://api.github.com/users/saahiluppal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saahiluppal/subscriptions", "organizations_url": "https://api.github.com/users/saahiluppal/orgs", "repos_url": "https://api.github.com/users/saahiluppal/repos", "events_url": "https://api.github.com/users/saahiluppal/events{/privacy}", "received_events_url": "https://api.github.com/users/saahiluppal/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "You don't need to provide position IDs to the model, the model will create them on its own. Similarly to the `attention_mask` and `token_type_ids`, you only need to provide them if you want them to be different to the default.\r\n\r\nIn the case of position IDs, provide them if you want to use a special scheme/different to what was used when pre-training the model.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,588
1,595
1,595
NONE
null
BertTokenizer provides us with input_ids, attention_mask and token_type_ids, but I'm unable to get the so-called "position ids". I have read the documentation as well but am unable to find anything.
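To make the answer in the comments concrete: when `position_ids` is not supplied, BERT builds consecutive indices internally, so passing them explicitly like this should be equivalent to the default — a sketch:

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

input_ids = tokenizer.encode("position ids are implicit", return_tensors="pt")
# Same scheme the model constructs internally: 0, 1, ..., seq_len - 1
position_ids = torch.arange(input_ids.size(1), dtype=torch.long).unsqueeze(0)
outputs = model(input_ids, position_ids=position_ids)
```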
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4170/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4170/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4169
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4169/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4169/comments
https://api.github.com/repos/huggingface/transformers/issues/4169/events
https://github.com/huggingface/transformers/pull/4169
613,041,410
MDExOlB1bGxSZXF1ZXN0NDEzODcwMTM4
4,169
Update __init__.py for AlbertMLMHead
{ "login": "zzj0402", "id": 15345547, "node_id": "MDQ6VXNlcjE1MzQ1NTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/15345547?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zzj0402", "html_url": "https://github.com/zzj0402", "followers_url": "https://api.github.com/users/zzj0402/followers", "following_url": "https://api.github.com/users/zzj0402/following{/other_user}", "gists_url": "https://api.github.com/users/zzj0402/gists{/gist_id}", "starred_url": "https://api.github.com/users/zzj0402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zzj0402/subscriptions", "organizations_url": "https://api.github.com/users/zzj0402/orgs", "repos_url": "https://api.github.com/users/zzj0402/repos", "events_url": "https://api.github.com/users/zzj0402/events{/privacy}", "received_events_url": "https://api.github.com/users/zzj0402/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "The issue was resolved." ]
1,588
1,594
1,594
NONE
null
https://github.com/huggingface/transformers/issues/4168
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4169/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4169/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4169", "html_url": "https://github.com/huggingface/transformers/pull/4169", "diff_url": "https://github.com/huggingface/transformers/pull/4169.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4169.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4168
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4168/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4168/comments
https://api.github.com/repos/huggingface/transformers/issues/4168/events
https://github.com/huggingface/transformers/issues/4168
613,039,522
MDU6SXNzdWU2MTMwMzk1MjI=
4,168
No name 'AlbertMLMHead' in module 'transformers'
{ "login": "zzj0402", "id": 15345547, "node_id": "MDQ6VXNlcjE1MzQ1NTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/15345547?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zzj0402", "html_url": "https://github.com/zzj0402", "followers_url": "https://api.github.com/users/zzj0402/followers", "following_url": "https://api.github.com/users/zzj0402/following{/other_user}", "gists_url": "https://api.github.com/users/zzj0402/gists{/gist_id}", "starred_url": "https://api.github.com/users/zzj0402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zzj0402/subscriptions", "organizations_url": "https://api.github.com/users/zzj0402/orgs", "repos_url": "https://api.github.com/users/zzj0402/repos", "events_url": "https://api.github.com/users/zzj0402/events{/privacy}", "received_events_url": "https://api.github.com/users/zzj0402/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hi, you can import it as such:\r\n```py\r\nfrom transformers.modeling_albert import AlbertMLMHead\r\n```\r\n\r\nIt's not in the `__init.py__` as it's not a model but part of one." ]
1,588
1,589
1,589
NONE
null
# 🌟 New model addition ## Model description MLM model head ``` No name 'AlbertMLMHead' in module 'transformers' ``` ## Open source status * [x] the model implementation is available: ``` class AlbertMLMHead(nn.Module): def __init__(self, config): super().__init__() self.LayerNorm = nn.LayerNorm(config.embedding_size) self.bias = nn.Parameter(torch.zeros(config.vocab_size)) self.dense = nn.Linear(config.hidden_size, config.embedding_size) self.decoder = nn.Linear(config.embedding_size, config.vocab_size) self.activation = ACT2FN[config.hidden_act] # Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings` self.decoder.bias = self.bias def forward(self, hidden_states): hidden_states = self.dense(hidden_states) hidden_states = self.activation(hidden_states) hidden_states = self.LayerNorm(hidden_states) hidden_states = self.decoder(hidden_states) prediction_scores = hidden_states return prediction_scores @add_start_docstrings( "Albert Model with a `language modeling` head on top.", ALBERT_START_DOCSTRING, ) ``` Include it in the `__init__.py` file and make it available for use. * [ ] the model weights are available: (give details) * [ ] who are the authors: (mention them, if possible by @gh-username)
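As the maintainers note in the comments, the head can already be imported from the module file directly rather than from the package root; a minimal sketch (the checkpoint name is illustrative):

```python
from transformers import AlbertConfig
from transformers.modeling_albert import AlbertMLMHead  # module-level import, not exposed in __init__.py

config = AlbertConfig.from_pretrained("albert-base-v2")
mlm_head = AlbertMLMHead(config)  # takes a config object, as in the pasted class above
```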
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4168/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4168/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4167
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4167/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4167/comments
https://api.github.com/repos/huggingface/transformers/issues/4167/events
https://github.com/huggingface/transformers/pull/4167
612,893,038
MDExOlB1bGxSZXF1ZXN0NDEzNzUxNDky
4,167
change order pytorch/tf in readme
{ "login": "clmnt", "id": 821155, "node_id": "MDQ6VXNlcjgyMTE1NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/821155?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clmnt", "html_url": "https://github.com/clmnt", "followers_url": "https://api.github.com/users/clmnt/followers", "following_url": "https://api.github.com/users/clmnt/following{/other_user}", "gists_url": "https://api.github.com/users/clmnt/gists{/gist_id}", "starred_url": "https://api.github.com/users/clmnt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clmnt/subscriptions", "organizations_url": "https://api.github.com/users/clmnt/orgs", "repos_url": "https://api.github.com/users/clmnt/repos", "events_url": "https://api.github.com/users/clmnt/events{/privacy}", "received_events_url": "https://api.github.com/users/clmnt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,588
1,588
1,588
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4167/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4167/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4167", "html_url": "https://github.com/huggingface/transformers/pull/4167", "diff_url": "https://github.com/huggingface/transformers/pull/4167.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4167.patch", "merged_at": 1588797067000 }
https://api.github.com/repos/huggingface/transformers/issues/4166
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4166/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4166/comments
https://api.github.com/repos/huggingface/transformers/issues/4166/events
https://github.com/huggingface/transformers/issues/4166
612,875,957
MDU6SXNzdWU2MTI4NzU5NTc=
4,166
Tapas
{ "login": "flozi00", "id": 47894090, "node_id": "MDQ6VXNlcjQ3ODk0MDkw", "avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flozi00", "html_url": "https://github.com/flozi00", "followers_url": "https://api.github.com/users/flozi00/followers", "following_url": "https://api.github.com/users/flozi00/following{/other_user}", "gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}", "starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flozi00/subscriptions", "organizations_url": "https://api.github.com/users/flozi00/orgs", "repos_url": "https://api.github.com/users/flozi00/repos", "events_url": "https://api.github.com/users/flozi00/events{/privacy}", "received_events_url": "https://api.github.com/users/flozi00/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "hey, @thomwolf this would be a great addition! have a look into it.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I think @NielsRogge is working on this over here: https://github.com/NielsRogge/transformers/tree/modeling_tapas" ]
1,588
1,602
1,595
CONTRIBUTOR
null
# 🌟 New model addition ## Model description Tapas extends the Bert architecture and is a transformer-based Table QA model Paper: https://arxiv.org/abs/2004.02349 ## Open source status * [X] the model implementation is available: (give details) https://github.com/google-research/tapas * [X] the model weights are available: (give details) The pretrained weights and data for fine-tuning are linked in the repository readme * [x] who are the authors: @thomasmueller-google is one of the authors
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4166/reactions", "total_count": 11, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 5 }
https://api.github.com/repos/huggingface/transformers/issues/4166/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4165
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4165/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4165/comments
https://api.github.com/repos/huggingface/transformers/issues/4165/events
https://github.com/huggingface/transformers/pull/4165
612,860,574
MDExOlB1bGxSZXF1ZXN0NDEzNzI1MjMw
4,165
[RFC] Sampling transform function for generation
{ "login": "turtlesoupy", "id": 448590, "node_id": "MDQ6VXNlcjQ0ODU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/448590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/turtlesoupy", "html_url": "https://github.com/turtlesoupy", "followers_url": "https://api.github.com/users/turtlesoupy/followers", "following_url": "https://api.github.com/users/turtlesoupy/following{/other_user}", "gists_url": "https://api.github.com/users/turtlesoupy/gists{/gist_id}", "starred_url": "https://api.github.com/users/turtlesoupy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/turtlesoupy/subscriptions", "organizations_url": "https://api.github.com/users/turtlesoupy/orgs", "repos_url": "https://api.github.com/users/turtlesoupy/repos", "events_url": "https://api.github.com/users/turtlesoupy/events{/privacy}", "received_events_url": "https://api.github.com/users/turtlesoupy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "(note that this will need to be refined to work with TF / beam search if we decide on the direction)", "Closing in favor of #5416 " ]
1,588
1,593
1,593
NONE
null
See https://github.com/huggingface/transformers/issues/4164
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4165/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4165/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4165", "html_url": "https://github.com/huggingface/transformers/pull/4165", "diff_url": "https://github.com/huggingface/transformers/pull/4165.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4165.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4164
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4164/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4164/comments
https://api.github.com/repos/huggingface/transformers/issues/4164/events
https://github.com/huggingface/transformers/issues/4164
612,859,708
MDU6SXNzdWU2MTI4NTk3MDg=
4,164
Add a sampling_transform callback to generation for arbitrary probability-warps
{ "login": "turtlesoupy", "id": 448590, "node_id": "MDQ6VXNlcjQ0ODU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/448590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/turtlesoupy", "html_url": "https://github.com/turtlesoupy", "followers_url": "https://api.github.com/users/turtlesoupy/followers", "following_url": "https://api.github.com/users/turtlesoupy/following{/other_user}", "gists_url": "https://api.github.com/users/turtlesoupy/gists{/gist_id}", "starred_url": "https://api.github.com/users/turtlesoupy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/turtlesoupy/subscriptions", "organizations_url": "https://api.github.com/users/turtlesoupy/orgs", "repos_url": "https://api.github.com/users/turtlesoupy/repos", "events_url": "https://api.github.com/users/turtlesoupy/events{/privacy}", "received_events_url": "https://api.github.com/users/turtlesoupy/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Very interesting idea! I think we eventually have to make the generation function more general anyways. \r\n\r\nMaybe it's time to move this whole code:\r\n\r\n```python \r\n # repetition penalty from CTRL paper (https://arxiv.org/abs/1909.05858)\r\n if repetition_penalty != 1.0:\r\n self.enforce_repetition_penalty_(next_token_logits, batch_size, 1, input_ids, repetition_penalty)\r\n if no_repeat_ngram_size > 0:\r\n # calculate a list of banned tokens to prevent repetitively generating the same ngrams\r\n # from fairseq: https://github.com/pytorch/fairseq/blob/a07cb6f40480928c9e0548b737aadd36ee66ac76/fairseq/sequence_generator.py#L345\r\n banned_tokens = calc_banned_ngram_tokens(input_ids, batch_size, no_repeat_ngram_size, cur_len)\r\n for batch_idx in range(batch_size):\r\n next_token_logits[batch_idx, banned_tokens[batch_idx]] = -float(\"inf\")\r\n if bad_words_ids is not None:\r\n # calculate a list of banned tokens according to bad words\r\n banned_tokens = calc_banned_bad_words_ids(input_ids, bad_words_ids)\r\n for batch_idx in range(batch_size):\r\n next_token_logits[batch_idx, banned_tokens[batch_idx]] = -float(\"inf\")\r\n # set eos token prob to zero if min_length is not reached\r\n if eos_token_id is not None and cur_len < min_length:\r\n next_token_logits[:, eos_token_id] = -float(\"inf\")\r\n if do_sample:\r\n # Temperature (higher temperature => more likely to sample low probability tokens)\r\n if temperature != 1.0:\r\n next_token_logits = next_token_logits / temperature\r\n # Top-p/top-k filtering\r\n next_token_logits = top_k_top_p_filtering(next_token_logits, top_k=top_k, top_p=top_p)\r\n # Sample\r\n probs = F.softmax(next_token_logits, dim=-1)\r\n next_token = torch.multinomial(probs, num_samples=1).squeeze(1)\r\n\r\n if sampling_transform:\r\n next_token = sampling_transform(input_ids, probs, next_token)\r\n else:\r\n # Greedy decoding\r\n next_token = torch.argmax(next_token_logits, dim=-1)\r\n```\r\n\r\neven to a `Sampler` class with a `sampler.sample(input_ids, next_token_logits)` which can also include a generic function as proposed. \r\n\r\nWhat are your thoughts on this @yjernite @thomwolf @sshleifer @LysandreJik ?", "Having a formal class sounds good to me; personally what I had in mind was a torchvision transforms type interface - so something like\r\n\r\n```\r\nSampler.Compose([\r\n temperature(2) ,\r\n top_p(75),\r\n early_eos(),\r\n my_crazy_custom_transform(),\r\n])\r\n```\r\n\r\nThe current operations are order-dependent which isn't necessarily apparent to the user in params. \r\n", "@turtlesoupy that's definitely a feature we want! Would it be possible to apply the transform at the logit level rather than have it sample too? It seems like it fits the proposed use case and it would make beam search significantly easier.", "@yjernite good call; the interface would have to be modified slightly to `(input_ids, next_probs, next_token) -> (next_probs)`. You might be able to fold the sampling procedure into the transforms. I know in my case I want to operate after a sample has been chosen\r\n\r\n```\r\nSampler.Compose([\r\n one_hot_draw(),\r\n my_crazy_custom_transform(),\r\n])\r\n```\r\n", "@patrickvonplaten @yjernite finally got around to doing this -- can you take a look at #5420 and let me know if it can be merged? ", "Thanks a lot for the PR :-) I will take a look this week!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,588
1,599
1,599
NONE
null
# 🚀 Feature request I'd like to add a `sampling_transform` callback argument to all `generate` functions in `modeling_utils` that allows arbitrary sampling from the probability distribution during sequence generation. The signature of this function would be `(input_ids, next_probs, next_token) -> next_token`. ## Motivation The `modeling_utils` generation function is getting pretty hairy -- including parameters for top p, top k, bad tokens, temperature, repetition penalty, etc. Every new way of sampling means that we have to add more parameters, and each one further complicates the function. I believe the right way of solving this is to provide a function argument that allows users to express arbitrary rules for the next sample. In the long run, we could replace all the other parameters with a set of pre-baked sampling_transform functions that can be composed at will. This method scales better to strange warps -- for example, in a project I'm working on (https://github.com/turtlesoupy/this-word-does-not-exist) I need to early-terminate sequences if they generate from a large set of bad tokens, and I need to continue generating if an EOS token is sampled too early. An example is here: https://github.com/turtlesoupy/this-word-does-not-exist/blob/260e33a8f420b9be8b1e7260cb03c74d6231686e/title_maker_pro/datasets.py#L386 ## Your contribution I'll attach a sample PR that makes this work for PyTorch and non-beam samples. If people like the idea, it should be easy to refine into a full PR that generalizes to beam search and TensorFlow.
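A minimal sketch of a callback matching the proposed `(input_ids, next_probs, next_token) -> next_token` signature; note that `sampling_transform` is the proposal itself, not an existing parameter, and the early-EOS re-draw logic below is illustrative rather than the project's actual implementation:

```python
import torch

EOS_TOKEN_ID = 2   # placeholder; depends on the tokenizer in use
MIN_LENGTH = 10    # hypothetical minimum length before EOS is allowed

def redraw_on_early_eos(input_ids, next_probs, next_token):
    """Re-sample any sequence that drew EOS before MIN_LENGTH tokens."""
    if input_ids.shape[1] < MIN_LENGTH and (next_token == EOS_TOKEN_ID).any():
        probs = next_probs.clone()
        probs[:, EOS_TOKEN_ID] = 0.0                      # forbid EOS for now
        probs = probs / probs.sum(dim=-1, keepdim=True)   # renormalize
        redraw = torch.multinomial(probs, num_samples=1).squeeze(1)
        next_token = torch.where(next_token == EOS_TOKEN_ID, redraw, next_token)
    return next_token

# under the proposal, this would be passed as:
# model.generate(input_ids, do_sample=True, sampling_transform=redraw_on_early_eos)
```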
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4164/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4164/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4163
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4163/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4163/comments
https://api.github.com/repos/huggingface/transformers/issues/4163/events
https://github.com/huggingface/transformers/issues/4163
612,849,891
MDU6SXNzdWU2MTI4NDk4OTE=
4,163
Cannot use camembert for question answering
{ "login": "ierezell", "id": 30974685, "node_id": "MDQ6VXNlcjMwOTc0Njg1", "avatar_url": "https://avatars.githubusercontent.com/u/30974685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ierezell", "html_url": "https://github.com/ierezell", "followers_url": "https://api.github.com/users/ierezell/followers", "following_url": "https://api.github.com/users/ierezell/following{/other_user}", "gists_url": "https://api.github.com/users/ierezell/gists{/gist_id}", "starred_url": "https://api.github.com/users/ierezell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ierezell/subscriptions", "organizations_url": "https://api.github.com/users/ierezell/orgs", "repos_url": "https://api.github.com/users/ierezell/repos", "events_url": "https://api.github.com/users/ierezell/events{/privacy}", "received_events_url": "https://api.github.com/users/ierezell/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[]
1,588
1,589
1,589
CONTRIBUTOR
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Camembert Language I am using the model on (English, Chinese ...): French The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The task I am working on is: * [x] an official GLUE/SQUaD task: Squad * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Load camembert for Q&A 2. Use the script for Q&A from the HuggingFace doc 3. Get a RuntimeError <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> from [https://huggingface.co/transformers/usage.html#question-answering](https://huggingface.co/transformers/usage.html#question-answering) : Note: I tried with `camembert-base`, `illuin/camembert-base-fquad` and `fmikaelian/camembert-base-fquad` ``` from transformers import AutoTokenizer, CamembertForQuestionAnswering import torch tokenizer = AutoTokenizer.from_pretrained("camembert-base") model = CamembertForQuestionAnswering.from_pretrained("camembert-base") text = r"""Some text in French""" questions = ["Just one question in French"] for question in questions: inputs = tokenizer.encode_plus(question, text, add_special_tokens=True, return_tensors="pt") input_ids = inputs["input_ids"].tolist()[0] text_tokens = tokenizer.convert_ids_to_tokens(input_ids) answer_start_scores, answer_end_scores = model(**inputs) answer_start = torch.argmax( answer_start_scores ) # Get the most likely beginning of answer with the argmax of the score answer_end = torch.argmax(answer_end_scores) + 1 # Get the most likely end of answer with the argmax of the score answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end])) print(f"Question: {question}") print(f"Answer: {answer}\n") ``` It fails as well with the `pipeline` method: ``` from transformers import pipeline q_a_pipeline = pipeline("question-answering", model=model, tokenizer=tokenizer) q_a_pipeline({'question': question, 'context': text}) ``` Stack trace: ``` Traceback (most recent call last): File "/home/covid_nlu/.local/lib/python3.8/site-packages/sanic/app.py", line 976, in handle_request response = await response File "test_server_sanic.py", line 72, in get_answer results = [q_a_pipeline({'question': question, 'context': doc}) File "test_server_sanic.py", line 72, in <listcomp> results = [q_a_pipeline({'question': question, 'context': doc}) File "/home/covid_nlu/.local/lib/python3.8/site-packages/transformers/pipelines.py", line 1109, in __call__ start, end = self.model(**fw_args) File "/home/covid_nlu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/covid_nlu/.local/lib/python3.8/site-packages/transformers/modeling_roberta.py", line 663, in forward outputs = self.roberta( File "/home/covid_nlu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/covid_nlu/.local/lib/python3.8/site-packages/transformers/modeling_bert.py", line 728, in forward embedding_output = self.embeddings( File 
"/home/covid_nlu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/covid_nlu/.local/lib/python3.8/site-packages/transformers/modeling_roberta.py", line 64, in forward return super().forward( File "/home/covid_nlu/.local/lib/python3.8/site-packages/transformers/modeling_bert.py", line 175, in forward token_type_embeddings = self.token_type_embeddings(token_type_ids) File "/home/covid_nlu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/covid_nlu/.local/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 112, in forward return F.embedding( File "/home/covid_nlu/.local/lib/python3.8/site-packages/torch/nn/functional.py", line 1724, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Run the example in French as it runs with Bert in English. Note: I am able to run the example with a Hugging Face Pipeline (with all the different camembert models, community or not): ``` from transformers import AutoTokenizer, CamembertForQuestionAnswering, pipeline bert_tok = AutoTokenizer.from_pretrained("camembert-base") bert = CamembertForQuestionAnswering.from_pretrained("camembert-base") nlp = pipeline('question-answering', model=bert, tokenizer=bert_tok) answer = nlp({'question': "A question in French", 'context': a_big_string_in_french}) print(answer) ``` ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.8.0 - Platform: Linux-5.6.10-arch1-1-x86_64-with-glibc2.2.5 - Python version: 3.8.2 - PyTorch version (GPU?): 1.4.0 (True) (Same with 1.5) - Tensorflow version (GPU?): 2.2.0-rc4 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4163/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4163/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4162
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4162/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4162/comments
https://api.github.com/repos/huggingface/transformers/issues/4162/events
https://github.com/huggingface/transformers/pull/4162
612,818,401
MDExOlB1bGxSZXF1ZXN0NDEzNjkwODA4
4,162
Add model card for the NER model
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,588
1,589
1,588
CONTRIBUTOR
null
Add the model-card for a new NER model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4162/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4162/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4162", "html_url": "https://github.com/huggingface/transformers/pull/4162", "diff_url": "https://github.com/huggingface/transformers/pull/4162.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4162.patch", "merged_at": 1588776056000 }
https://api.github.com/repos/huggingface/transformers/issues/4161
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4161/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4161/comments
https://api.github.com/repos/huggingface/transformers/issues/4161/events
https://github.com/huggingface/transformers/issues/4161
612,816,172
MDU6SXNzdWU2MTI4MTYxNzI=
4,161
run_generation.py - use of < redirect for input text file only reads first line - solution requested
{ "login": "GenTxt", "id": 22547261, "node_id": "MDQ6VXNlcjIyNTQ3MjYx", "avatar_url": "https://avatars.githubusercontent.com/u/22547261?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GenTxt", "html_url": "https://github.com/GenTxt", "followers_url": "https://api.github.com/users/GenTxt/followers", "following_url": "https://api.github.com/users/GenTxt/following{/other_user}", "gists_url": "https://api.github.com/users/GenTxt/gists{/gist_id}", "starred_url": "https://api.github.com/users/GenTxt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GenTxt/subscriptions", "organizations_url": "https://api.github.com/users/GenTxt/orgs", "repos_url": "https://api.github.com/users/GenTxt/repos", "events_url": "https://api.github.com/users/GenTxt/events{/privacy}", "received_events_url": "https://api.github.com/users/GenTxt/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "I think in your case it might make more sense to use the `TextGenerationPipeline`. It's a two liner that does the same as the script. You can wrap a `open(inpt_text_file)` and `file.write()` around these two lines in your use case. \r\n\r\nSee documentation for the generation pipeline here: https://huggingface.co/transformers/usage.html#text-generation", "Closing for now" ]
1,588
1,591
1,591
NONE
null
Example: python3 examples/run_generation.py --model_type=gpt2 --model_name_or_path="models/custom-774M" --length=400 < test.txt > test_gpt2_400.txt Versions 2.0.3 and earlier accept a simple Linux redirect `< test.txt` and read the file line by line while generating the corresponding output to `> test_gpt2_400_out.txt`. This works perfectly and does not require any changes. test.txt is a simple 3-line text file with one sentence on each line. Versions 2.0.4 to 2.0.8 do not accept the `<` redirect in the same manner: the above only reads the first line of test.txt, generates, then exits. Output is the same for the first line. Is there a simple change to the 2.0.8 version of run_generation.py that will restore this useful terminal function under Ubuntu 18.04? I have tried 'while read' loops but only managed to get the Model prompt >>> three times, each requesting terminal input: exec 3<"test.txt" while IFS= read -r line <&3; do python3 examples/run_generation.py --model_type=gpt2 --model_name_or_path="models/custom-774M" --length=400 done > test_gpt2_400_out.txt Any help is appreciated.
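A minimal sketch of the workaround suggested in the comments, wrapping the `text-generation` pipeline in a plain file loop; the checkpoint path and file names are the reporter's placeholders:

```python
from transformers import pipeline

# "models/custom-774M" stands in for a local GPT-2 checkpoint directory
generator = pipeline("text-generation", model="models/custom-774M")

with open("test.txt") as fin, open("test_gpt2_400_out.txt", "w") as fout:
    for line in fin:
        prompt = line.strip()
        if not prompt:
            continue  # skip blank lines
        result = generator(prompt, max_length=400)
        fout.write(result[0]["generated_text"] + "\n")
```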
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4161/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4161/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4160
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4160/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4160/comments
https://api.github.com/repos/huggingface/transformers/issues/4160/events
https://github.com/huggingface/transformers/issues/4160
612,783,395
MDU6SXNzdWU2MTI3ODMzOTU=
4,160
[Marian] Multilingual models require language codes
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 2039044877, "node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3", "url": "https://api.github.com/repos/huggingface/transformers/labels/marian", "name": "marian", "color": "30cc95", "default": false, "description": "" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[]
1,588
1,589
1,589
CONTRIBUTOR
null
- [ ] figure out codes - [ ] update/subclass `MarianTokenizer`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4160/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4160/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4159
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4159/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4159/comments
https://api.github.com/repos/huggingface/transformers/issues/4159/events
https://github.com/huggingface/transformers/pull/4159
612,780,029
MDExOlB1bGxSZXF1ZXN0NDEzNjYwMDg1
4,159
Tokenizer.batch_decode convenience method
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4159?src=pr&el=h1) Report\n> Merging [#4159](https://codecov.io/gh/huggingface/transformers/pull/4159?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7822cd38a0e18004ab1a55bfe85e8b3bc0d8857a&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4159/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4159?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4159 +/- ##\n=======================================\n Coverage 78.14% 78.15% \n=======================================\n Files 120 120 \n Lines 20053 20053 \n=======================================\n+ Hits 15670 15672 +2 \n+ Misses 4383 4381 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4159?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/4159/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `87.32% <ø> (-0.35%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4159/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.51% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4159/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.60% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4159/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4159?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4159?src=pr&el=footer). Last update [7822cd3...cc21682](https://codecov.io/gh/huggingface/transformers/pull/4159?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I'm in favor of this since we also have a `batch_encode_plus` method. ", "One small thing: the `encode_plus` method uses the `batch_encode_plus` whereas here it's the other way around. But this does not bother me too much.", "I don't have a strong knowledge of the API design for tokenizers so will let others chime in instead.\r\n" ]
1,588
1,589
1,589
CONTRIBUTOR
null
A convenience method that turns the list of token-id sequences returned by `model.generate` into a list of decoded sentences.
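A sketch of how the helper would be used, assuming a Marian checkpoint; the model name and the `prepare_translation_batch` call reflect the API around the time of this PR and may differ in later versions:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-en"  # illustrative checkpoint
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer.prepare_translation_batch(["Veuillez apporter une bouteille de vin."])
generated = model.generate(**batch)

# the new convenience method: one call instead of a list comprehension over decode()
sentences = tokenizer.batch_decode(generated, skip_special_tokens=True)
print(sentences)  # e.g. ["Please bring a bottle of wine."]
```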
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4159/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4159/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4159", "html_url": "https://github.com/huggingface/transformers/pull/4159", "diff_url": "https://github.com/huggingface/transformers/pull/4159.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4159.patch", "merged_at": 1589478647000 }
https://api.github.com/repos/huggingface/transformers/issues/4158
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4158/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4158/comments
https://api.github.com/repos/huggingface/transformers/issues/4158/events
https://github.com/huggingface/transformers/issues/4158
612,773,264
MDU6SXNzdWU2MTI3NzMyNjQ=
4,158
[Marian] @-@ symbol causes strange generations
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 2039044877, "node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3", "url": "https://api.github.com/repos/huggingface/transformers/labels/marian", "name": "marian", "color": "30cc95", "default": false, "description": "" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "This is only an issue for BPE models, which we are not supporting." ]
1,588
1,589
1,589
CONTRIBUTOR
null
good fr-en test case: ``` Veuillez m' apporter une demi @-@ bouteille de vin . ``` (English: "Please bring me a half @-@ bottle of wine.")
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4158/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4157
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4157/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4157/comments
https://api.github.com/repos/huggingface/transformers/issues/4157/events
https://github.com/huggingface/transformers/issues/4157
612,772,673
MDU6SXNzdWU2MTI3NzI2NzM=
4,157
[Marian] Readme parser defaults to porting oldest model
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 2039044877, "node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3", "url": "https://api.github.com/repos/huggingface/transformers/labels/marian", "name": "marian", "color": "30cc95", "default": false, "description": "" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "Not true, fixed on master." ]
1,588
1,588
1,588
CONTRIBUTOR
null
- logic should be newest model or all models with unique names. - the download URL (and probably other metadata) should be put in a `model_cards/` file.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4157/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4157/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4156
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4156/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4156/comments
https://api.github.com/repos/huggingface/transformers/issues/4156/events
https://github.com/huggingface/transformers/pull/4156
612,756,040
MDExOlB1bGxSZXF1ZXN0NDEzNjQwNzg1
4,156
Removed the use of deprecated Variable API in PPLM example
{ "login": "prajjwal1", "id": 24690051, "node_id": "MDQ6VXNlcjI0NjkwMDUx", "avatar_url": "https://avatars.githubusercontent.com/u/24690051?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prajjwal1", "html_url": "https://github.com/prajjwal1", "followers_url": "https://api.github.com/users/prajjwal1/followers", "following_url": "https://api.github.com/users/prajjwal1/following{/other_user}", "gists_url": "https://api.github.com/users/prajjwal1/gists{/gist_id}", "starred_url": "https://api.github.com/users/prajjwal1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prajjwal1/subscriptions", "organizations_url": "https://api.github.com/users/prajjwal1/orgs", "repos_url": "https://api.github.com/users/prajjwal1/repos", "events_url": "https://api.github.com/users/prajjwal1/events{/privacy}", "received_events_url": "https://api.github.com/users/prajjwal1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you please tell what does this mean ?\r\n```\r\n#!/bin/bash -eo pipefail\r\nblack --check --line-length 119 --target-version py35 examples templates tests src utils\r\n\r\nwould reformat /home/circleci/transformers/examples/pplm/run_pplm.py\r\nOh no! 💥 💔 💥\r\n1 file would be reformatted, 266 files would be left unchanged.\r\n\r\nExited with code exit status 1\r\n```\r", "You need to run `make style` and `make quality` locally and push the resulting changes", "Also this file was moved around so you'll need to re-apply your changes to new location", "I ran both make commands as you said.\r\n```\r\n(base) u37216@s001-n007:~/transformers$ make style\r\nblack --line-length 119 --target-version py35 examples templates tests src utils\r\nreformatted /home/u37216/transformers/src/transformers/__init__.py\r\nreformatted /home/u37216/transformers/templates/adding_a_new_example_script/run_xxx.py\r\nreformatted /home/u37216/transformers/templates/adding_a_new_example_script/utils_xxx.py\r\nAll done! ✨ 🍰 ✨\r\n3 files reformatted, 272 files left unchanged.\r\nisort --recursive examples templates tests src utils\r\nFixing /home/u37216/transformers/templates/adding_a_new_example_script/run_xxx.py\r\nFixing /home/u37216/transformers/templates/adding_a_new_example_script/utils_xxx.py\r\nFixing /home/u37216/transformers/src/transformers/__init__.py\r\n(base) u37216@s001-n007:~/transformers$ make quality\r\nblack --check --line-length 119 --target-version py35 examples templates tests src utils\r\nwould reformat /home/u37216/transformers/src/transformers/__init__.py\r\nwould reformat /home/u37216/transformers/templates/adding_a_new_example_script/run_xxx.py\r\nwould reformat /home/u37216/transformers/templates/adding_a_new_example_script/utils_xxx.py\r\nOh no! 💥 💔 💥\r\n3 files would be reformatted, 272 files would be left unchanged.\r\nMakefile:6: recipe for target 'quality' failed\r\nmake: *** [quality] Error 1\r\n```\r\nIs this related to my code ? I didn't even touch other parts of the file. I think the formatting issue came from master itself (I checked it). ", "@julien-c Can you please see this ?" ]
1,588
1,590
1,590
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4156/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4156/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4156", "html_url": "https://github.com/huggingface/transformers/pull/4156", "diff_url": "https://github.com/huggingface/transformers/pull/4156.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4156.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4155
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4155/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4155/comments
https://api.github.com/repos/huggingface/transformers/issues/4155/events
https://github.com/huggingface/transformers/issues/4155
612,664,241
MDU6SXNzdWU2MTI2NjQyNDE=
4,155
num_samples=0 when using pretrained model
{ "login": "Rajashan", "id": 8950525, "node_id": "MDQ6VXNlcjg5NTA1MjU=", "avatar_url": "https://avatars.githubusercontent.com/u/8950525?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rajashan", "html_url": "https://github.com/Rajashan", "followers_url": "https://api.github.com/users/Rajashan/followers", "following_url": "https://api.github.com/users/Rajashan/following{/other_user}", "gists_url": "https://api.github.com/users/Rajashan/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rajashan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rajashan/subscriptions", "organizations_url": "https://api.github.com/users/Rajashan/orgs", "repos_url": "https://api.github.com/users/Rajashan/repos", "events_url": "https://api.github.com/users/Rajashan/events{/privacy}", "received_events_url": "https://api.github.com/users/Rajashan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, did you find out the reason for this error? I get the same error even on Wikitext-2 corpus ", "Yeah, it is an issue with how your data is read. Try the --line_by_line argument. " ]
1,588
1,589
1,588
NONE
null
When training the "run_language_modelling.py" example with a pretrained bert model, I get the error: """ Traceback (most recent call last): File "run_language_modeling.py", line 284, in <module> main() File "run_language_modeling.py", line 254, in main trainer.train(model_path=model_path) File "C:\Users\AppData\Roaming\Python\Python38\site-packages\transformers\trainer.py", line 243, in train train_dataloader = self.get_train_dataloader() File "C:\Users\AppData\Roaming\Python\Python38\site-packages\transformers\trainer.py", line 179, in get_train_dataloader RandomSampler(self.train_dataset) if self.args.local_rank == -1 else DistributedSampler(self.train_dataset) File "C:\Users\Anaconda3\envs\kompetence\lib\site-packages\torch\utils\data\sampler.py", line 93, in __init__ raise ValueError("num_samples should be a positive integer " ValueError: num_samples should be a positive integer value, but got num_samples=0 """ The cached_lm_berttokenizer that gets saved to the data folder looks like this: """ €]”. """ In the folder for my bert model, I have a config.json, pytorch_model.bin, special_tokens_map.json, vocab.txt, tokenizer_config.json. I run the following command python run_language_modeling.py --output_dir=. --model_type=bert --model_name_or_path="H:\danish_bert_uncased_v2\\" --mlm --train_data_file="H:\\data\train.txt" --eval_data_file="H:\data\test.txt" --do_eval --do_train --overwrite_output_dir -block_size=1000 Is anything missing here? The data is composed of sentences seperated by newlines and an empty line between documents.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4155/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4155/timeline
completed
null
null
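A short note on the resolution in the record above: the default `TextDataset` only emits examples in whole chunks of `block_size` tokens, so a small corpus (or an over-large `--block_size`, such as the 1000 used in the reported command) produces zero examples, which is exactly what trips `num_samples=0` in `RandomSampler`. A minimal sketch of the difference follows; the file name and checkpoint are placeholders, not taken from the issue:

```python
# Sketch: why --line_by_line fixes "num_samples=0" (file/model names are placeholders).
from transformers import AutoTokenizer, LineByLineTextDataset, TextDataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# TextDataset yields one example per full block_size-token chunk:
# a corpus shorter than block_size yields an empty dataset.
block_dataset = TextDataset(tokenizer=tokenizer, file_path="train.txt", block_size=1000)

# LineByLineTextDataset yields one example per non-empty line instead.
line_dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="train.txt", block_size=128)

print(len(block_dataset))  # can be 0 for a small file -> num_samples=0
print(len(line_dataset))   # one example per non-empty line
```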
https://api.github.com/repos/huggingface/transformers/issues/4154
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4154/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4154/comments
https://api.github.com/repos/huggingface/transformers/issues/4154/events
https://github.com/huggingface/transformers/pull/4154
612,656,937
MDExOlB1bGxSZXF1ZXN0NDEzNTU5OTY0
4,154
Rewritten batch support in pipelines.
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4154?src=pr&el=h1) Report\n> Merging [#4154](https://codecov.io/gh/huggingface/transformers/pull/4154?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/818463ee8eaf3a1cd5ddc2623789cbd7bb517d02&el=desc) will **increase** coverage by `0.03%`.\n> The diff coverage is `96.42%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4154/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4154?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4154 +/- ##\n==========================================\n+ Coverage 78.79% 78.83% +0.03% \n==========================================\n Files 114 114 \n Lines 18711 18726 +15 \n==========================================\n+ Hits 14743 14762 +19 \n+ Misses 3968 3964 -4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4154?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/4154/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.27% <96.42%> (+1.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4154/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4154?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4154?src=pr&el=footer). Last update [818463e...95a257c](https://codecov.io/gh/huggingface/transformers/pull/4154?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Ok so this should be ready to merge, right @mfuntowicz?", "+1 Would love a merge here so I can more easily improve `TranslationPipeline`" ]
1,588
1,588
1,588
MEMBER
null
Batch support in Pipeline was confusing and not well tested. This PR rewrites all the content of `DefaultArgumentHandler`, which handles most of the input conversions (args, kwargs, batched, etc.), and adds unit tests for this specific class. Signed-off-by: Morgan Funtowicz <[email protected]>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4154/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4154/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4154", "html_url": "https://github.com/huggingface/transformers/pull/4154", "diff_url": "https://github.com/huggingface/transformers/pull/4154.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4154.patch", "merged_at": 1588859561000 }
https://api.github.com/repos/huggingface/transformers/issues/4153
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4153/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4153/comments
https://api.github.com/repos/huggingface/transformers/issues/4153/events
https://github.com/huggingface/transformers/issues/4153
612,655,536
MDU6SXNzdWU2MTI2NTU1MzY=
4,153
Embedding index getting out of range while running camemebert model
{ "login": "ierezell", "id": 30974685, "node_id": "MDQ6VXNlcjMwOTc0Njg1", "avatar_url": "https://avatars.githubusercontent.com/u/30974685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ierezell", "html_url": "https://github.com/ierezell", "followers_url": "https://api.github.com/users/ierezell/followers", "following_url": "https://api.github.com/users/ierezell/following{/other_user}", "gists_url": "https://api.github.com/users/ierezell/gists{/gist_id}", "starred_url": "https://api.github.com/users/ierezell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ierezell/subscriptions", "organizations_url": "https://api.github.com/users/ierezell/orgs", "repos_url": "https://api.github.com/users/ierezell/repos", "events_url": "https://api.github.com/users/ierezell/events{/privacy}", "received_events_url": "https://api.github.com/users/ierezell/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I am running into the same error on my own script. Interestingly it only appears on CPU... Did you find a solution?", "No, I want to get a French Q&A pipeline, surprinsingly, with the hugging face pipeline everything works great, I can plug the code in a local server and make requests on it. \r\n\r\nBut when I try to use the same code in a docker envrionement to ship it, it fail with this error (only in french with camembert, classic bert works fine)\r\n\r\nI get the error locally as well if I try not to use the hugging face pipeline but write my own inference (as described above)", "I can confirm it's working on GPU local (and even in a docker) but still stuck on CPU", "I actually figured out my error. I was adding special tokens to the tokenizer (like begin-of-sequence) but did not resize the models token embeddings via: \r\n`model.resize_token_embeddings(len(self.tokenizer))`\r\nJust in case someone else is not reading the documentation carefully enough :see_no_evil: \r\nConsidering that, the error message did actually make sense. ", "Hi @Ierezell, there is indeed an issue which I'm patching in #4289. Please be aware that you're using `CamembertModel` which cannot be used for question answering. Please use `CamembertForQuestionAnswering` instead.", "It's patched now, please install from source and there should be no error anymore!", "Hi @LysandreJik, I'm concious that I used it with a non QA model but it was to try the base model supported by hugging face. \r\n\r\nI tried as well with `illuin/camembert-base-fquad` (large as well) and with `fmikaelian/camembert-base-fquad` \r\n\r\nI will install the latest version and try it.\r\n\r\nThnaks a lot for the fast support !", "I tried your fix but it lead to key errors : \r\n\r\n```\r\n File \"/home/pedro/.local/lib/python3.8/site-packages/transformers/pipelines.py\", line 1156, in __call__\r\n answers += [\r\n File \"/home/pedro/.local/lib/python3.8/site-packages/transformers/pipelines.py\", line 1159, in <listcomp>\r\n \"start\": np.where(char_to_word == feature.token_to_orig_map[s])[0][0].item(),\r\nKeyError: 0\r\n```", "Could you provide a reproducible script? I can't reproduce.", "My problem here was surely linked with #4674 everything seems to work now, thanks a lot" ]
1,588
1,591
1,589
CONTRIBUTOR
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Camembert Language I am using the model on (English, Chinese ...): French The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Take a file with french text 2. Load pretrained Camembert Model and tokenizer as in the doc 3. Run inference <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> * Initialisation : `bert = CamembertModel.from_pretrained("camembert-base")` `bert_tok = CamembertTokenizer.from_pretrained("camembert-base")` * Inference : like [https://huggingface.co/transformers/usage.html#question-answering](https://huggingface.co/transformers/usage.html#question-answering) ``` inputs = bert_tok.encode_plus(question, context, add_special_tokens=True, return_tensors="pt") input_ids = inputs["input_ids"].tolist()[0] text_tokens = bert_tok.convert_ids_to_tokens(input_ids) answer_start_scores, answer_end_scores = bert(**inputs) ``` It works by removing the context argument (text_pair argument) but I need it to do question answering with other models and it lead to the same error with pipelines * Stack trace : ``` IndexError Traceback (most recent call last) <ipython-input-9-73762e6cf69b> in <module> 2 for utterances in file.readlines(): 3 input_tensor = bert_tok.batch_encode_plus([utterances], pad_to_max_length=True, return_tensors="pt") ----> 4 last_hidden, pool = bert(input_tensor["input_ids"], input_tensor["attention_mask"]) 5 6 ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 functools.update_wrapper(wrapper, hook) 549 grad_fn.register_hook(wrapper) --> 550 return result 551 552 def __setstate__(self, state): ~/.local/lib/python3.8/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask) 780 head_mask = [None] * self.config.num_hidden_layers 781 --> 782 embedding_output = self.embeddings( 783 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds 784 ) ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 functools.update_wrapper(wrapper, hook) 549 grad_fn.register_hook(wrapper) --> 550 return result 551 552 def __setstate__(self, state): ~/.local/lib/python3.8/site-packages/transformers/modeling_roberta.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds) 62 position_ids = self.create_position_ids_from_inputs_embeds(inputs_embeds) 63 ---> 64 return super().forward( 65 input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds 66 ) ~/.local/lib/python3.8/site-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds) 172 if inputs_embeds is None: 173 inputs_embeds = self.word_embeddings(input_ids) --> 174 position_embeddings = self.position_embeddings(position_ids) 175 token_type_embeddings = self.token_type_embeddings(token_type_ids) 176 ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 functools.update_wrapper(wrapper, hook) 549 grad_fn.register_hook(wrapper) --> 550 return result 551 552 def __setstate__(self, state): ~/.local/lib/python3.8/site-packages/torch/nn/modules/sparse.py in forward(self, input) 110 111 def forward(self, input): --> 112 return F.embedding( 113 input, self.weight, self.padding_idx, self.max_norm, 114 self.norm_type, self.scale_grad_by_freq, self.sparse) ~/.local/lib/python3.8/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1722 if dim == 3: 1723 div = pad(div, (0, 0, size // 2, (size - 1) // 2)) -> 1724 div = avg_pool2d(div, (size, 1), stride=1).squeeze(1) 1725 else: 1726 sizes = input.size() IndexError: index out of range in self ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Run inference without any error ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> `transformers` version: 2.8.0 - Platform: Linux-5.6.10-arch1-1-x86_64-with-glibc2.2.5 - Python version: 3.8.2 - PyTorch version (GPU?): 1.4.0 (True) (Same with 1.5) - Tensorflow version (GPU?): 2.2.0-rc4 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4153/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4153/timeline
completed
null
null
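The thread above contains two distinct fixes: the library patch in #4289, and, for the commenter who hit the same error in their own script, resizing the embedding matrix after adding special tokens. A minimal sketch of the latter (the `<bos>` token is an invented example, not from the issue):

```python
# Sketch of the fix a commenter describes: new special tokens enlarge the
# vocabulary, so the model's embedding table must be resized to match,
# otherwise lookups raise "IndexError: index out of range in self".
from transformers import CamembertModel, CamembertTokenizer

tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
model = CamembertModel.from_pretrained("camembert-base")

tokenizer.add_special_tokens({"additional_special_tokens": ["<bos>"]})  # "<bos>" is hypothetical
model.resize_token_embeddings(len(tokenizer))  # keep vocab and embedding sizes in sync
```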
https://api.github.com/repos/huggingface/transformers/issues/4152
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4152/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4152/comments
https://api.github.com/repos/huggingface/transformers/issues/4152/events
https://github.com/huggingface/transformers/pull/4152
612,579,185
MDExOlB1bGxSZXF1ZXN0NDEzNDk2OTk0
4,152
[Marian] documentation and AutoModel support
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 2039044877, "node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3", "url": "https://api.github.com/repos/huggingface/transformers/labels/marian", "name": "marian", "color": "30cc95", "default": false, "description": "" } ]
closed
false
null
[]
[ "### Screenshots of Documentation\r\n\r\n![image](https://user-images.githubusercontent.com/6045025/81184705-b8f5f700-8f7e-11ea-896f-9571d542cb32.png)\r\n![image](https://user-images.githubusercontent.com/6045025/81184741-c317f580-8f7e-11ea-96f2-a38c89224f7b.png)\r\n\r\n[ x] Shows up in TOC:\r\n![image](https://user-images.githubusercontent.com/6045025/81184776-cd39f400-8f7e-11ea-8420-cf5f34eac1d7.png)\r\n", "The documentation looks great!", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4152?src=pr&el=h1) Report\n> Merging [#4152](https://codecov.io/gh/huggingface/transformers/pull/4152?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a01a3fecb4fe203c086d3d7450b76bbcfa035725&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4152/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4152?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4152 +/- ##\n=======================================\n Coverage 77.52% 77.52% \n=======================================\n Files 120 120 \n Lines 19932 19932 \n=======================================\n Hits 15453 15453 \n Misses 4479 4479 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4152?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4152?src=pr&el=footer). Last update [a01a3fe...a01a3fe](https://codecov.io/gh/huggingface/transformers/pull/4152?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,588
1,589
1,589
CONTRIBUTOR
null
- Adds integration tests for en-fr, fr-en. - Easier bulk conversion - remove unused pretrained_model_archive_map constant. - boilerplate to make AutoModelWithLMHead, AutoTokenizer, AutoConfig work. ### Metrics: For [fr-en test set](https://object.pouta.csc.fi/OPUS-MT-models/fr-en/opus-2020-02-26.test.txt): - BLEU score from posted translations: 57.4979 - BLEU score from `MarianMTModel.from_pretrained('Helsinki-NLP/opus-mt-fr-en')`: 57.4817 - no perf change for fp16 - can fit batch_size=512 on 16GB card in fp16 - speed: 89s for 5k examples = 56 examples/second
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4152/reactions", "total_count": 5, "+1": 0, "-1": 0, "laugh": 0, "hooray": 3, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4152/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4152", "html_url": "https://github.com/huggingface/transformers/pull/4152", "diff_url": "https://github.com/huggingface/transformers/pull/4152.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4152.patch", "merged_at": 1589133297000 }
https://api.github.com/repos/huggingface/transformers/issues/4151
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4151/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4151/comments
https://api.github.com/repos/huggingface/transformers/issues/4151/events
https://github.com/huggingface/transformers/issues/4151
612,482,533
MDU6SXNzdWU2MTI0ODI1MzM=
4,151
How to pre-train BART model
{ "login": "omerarshad", "id": 16164105, "node_id": "MDQ6VXNlcjE2MTY0MTA1", "avatar_url": "https://avatars.githubusercontent.com/u/16164105?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omerarshad", "html_url": "https://github.com/omerarshad", "followers_url": "https://api.github.com/users/omerarshad/followers", "following_url": "https://api.github.com/users/omerarshad/following{/other_user}", "gists_url": "https://api.github.com/users/omerarshad/gists{/gist_id}", "starred_url": "https://api.github.com/users/omerarshad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omerarshad/subscriptions", "organizations_url": "https://api.github.com/users/omerarshad/orgs", "repos_url": "https://api.github.com/users/omerarshad/repos", "events_url": "https://api.github.com/users/omerarshad/events{/privacy}", "received_events_url": "https://api.github.com/users/omerarshad/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834052847, "node_id": "MDU6TGFiZWwxODM0MDUyODQ3", "url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Finetuning)", "name": "Ex: LM (Finetuning)", "color": "26FFF8", "default": false, "description": "Related to language modeling fine-tuning" }, { "id": 1834053007, "node_id": "MDU6TGFiZWwxODM0MDUzMDA3", "url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Pretraining)", "name": "Ex: LM (Pretraining)", "color": "76FFAF", "default": false, "description": "Related to language modeling pre-training" } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "We still need to provide a good docstring/notebook for this. It's on our ToDo-List. :-) \r\n\r\nOr @sshleifer - is there already something for Bart? ", "Nothing yet, would be good to add!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I have seen the same [issue](https://github.com/pytorch/fairseq/issues/1899) in fairseq BART!.", "Hi, any news about bart pre-training? ", " who can tell me how to pre-train the bart on my own dataset? I am so confused ....\r\nthank you so much", "Maybe this comment can help: https://github.com/huggingface/transformers/issues/5096#issuecomment-645860271", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Any news on this please?\r\n", "not so far, would be great to have it. Thanks.", "I and my co-worker wrote a demo according to roberta pretraining demo.\r\n```\r\n#encoding=utf-8\r\n\r\nfrom transformers import (\r\n BartForConditionalGeneration, BartTokenizer, BartForCausalLM,\r\n Seq2SeqTrainingArguments, Seq2SeqTrainer\r\n )\r\n\r\nimport torch\r\nfrom torch.utils.data import random_split\r\n\r\n\r\n# ## Initiating model and trainer for training\r\nfrom transformers import BartModel, BartConfig\r\nfrom transformers import BartTokenizerFast\r\n\r\n\r\nconfiguration = BartConfig(\r\n vocab_size=52000,\r\n max_position_embeddings=258,\r\n d_model=256,\r\n encoder_layers=3,\r\n decoder_layers=3,\r\n encoder_attention_heads=4,\r\n decoder_attention_heads=4,\r\n decoder_ffn_dim=1024,\r\n encoder_ffn_dim=1024,\r\n)\r\nmodel = BartForCausalLM(configuration)\r\ntokenizer = BartTokenizerFast.from_pretrained(\"./dic\", max_len=256, additional_special_tokens=['[CH]', '[OTHER]', '[VAR]', '[NUM]'])\r\n\r\n\r\n# ### HTTP Request DataPreparing & Modeling\r\ndata = []\r\nwith open(\"../data/sample.txt\") as f1:\r\n for src in f1:\r\n data.append(\r\n {\r\n \"seq2seq\": {\r\n \"input\": src.strip()\r\n }\r\n }\r\n )\r\nprint(f'total size of data is {len(data)}')\r\n\r\n\r\n# splitting dataset into train, validation\r\nsplit = 0.2\r\ntrain_dataset, eval_dataset = random_split(data, lengths=[int((1-split)*len(data))+1, int(split*len(data))])\r\n\r\n\r\n# defining collator functioon for preparing batches on the fly ..\r\ndef data_collator(features:list):\r\n inputs = [f[\"seq2seq\"][\"input\"] for f in features]\r\n batch = tokenizer.prepare_seq2seq_batch(src_texts=inputs, max_length=256, padding='max_length')\r\n batch[\"labels\"] = batch[\"input_ids\"].copy()\r\n for k in batch:\r\n batch[k] = torch.tensor(batch[k])\r\n return batch\r\n\r\n\r\nbatch_out = data_collator(eval_dataset)\r\nprint(batch_out)\r\nprint(batch_out['input_ids'].shape,batch_out['labels'].shape,batch_out['attention_mask'].shape)\r\n\r\n\r\n# defining training related arguments\r\nargs = Seq2SeqTrainingArguments(output_dir=\"clm-checkpoints\",\r\n do_train=True,\r\n do_eval=True,\r\n evaluation_strategy=\"epoch\",\r\n per_device_train_batch_size=8,\r\n per_device_eval_batch_size=8,\r\n learning_rate=5e-5,\r\n num_train_epochs=1,\r\n logging_dir=\"./logs\")\r\n\r\n\r\n# defining trainer using 🤗\r\ntrainer = Seq2SeqTrainer(model=model, \r\n args=args, \r\n data_collator=data_collator, \r\n train_dataset=train_dataset, \r\n eval_dataset=eval_dataset)\r\n\r\n\r\n# ## Training time\r\ntrainer.train()\r\n# It will take hours to train this model on 
this dataset\r\n\r\n\r\n# lets save model\r\ntrainer.evaluate(eval_dataset=eval_dataset)\r\ntrainer.save_model(\"clm-checkpoints\")\r\n\r\n```", "> I and my co-worker wrote a demo according to roberta pretraining demo.\r\n> \r\n> ```\r\n> #encoding=utf-8\r\n> \r\n> from transformers import (\r\n> BartForConditionalGeneration, BartTokenizer, BartForCausalLM,\r\n> Seq2SeqTrainingArguments, Seq2SeqTrainer\r\n> )\r\n> \r\n> import torch\r\n> from torch.utils.data import random_split\r\n> \r\n> \r\n> # ## Initiating model and trainer for training\r\n> from transformers import BartModel, BartConfig\r\n> from transformers import BartTokenizerFast\r\n> \r\n> \r\n> configuration = BartConfig(\r\n> vocab_size=52000,\r\n> max_position_embeddings=258,\r\n> d_model=256,\r\n> encoder_layers=3,\r\n> decoder_layers=3,\r\n> encoder_attention_heads=4,\r\n> decoder_attention_heads=4,\r\n> decoder_ffn_dim=1024,\r\n> encoder_ffn_dim=1024,\r\n> )\r\n> model = BartForCausalLM(configuration)\r\n> tokenizer = BartTokenizerFast.from_pretrained(\"./dic\", max_len=256, additional_special_tokens=['[CH]', '[OTHER]', '[VAR]', '[NUM]'])\r\n> \r\n> \r\n> # ### HTTP Request DataPreparing & Modeling\r\n> data = []\r\n> with open(\"../data/sample.txt\") as f1:\r\n> for src in f1:\r\n> data.append(\r\n> {\r\n> \"seq2seq\": {\r\n> \"input\": src.strip()\r\n> }\r\n> }\r\n> )\r\n> print(f'total size of data is {len(data)}')\r\n> \r\n> \r\n> # splitting dataset into train, validation\r\n> split = 0.2\r\n> train_dataset, eval_dataset = random_split(data, lengths=[int((1-split)*len(data))+1, int(split*len(data))])\r\n> \r\n> \r\n> # defining collator functioon for preparing batches on the fly ..\r\n> def data_collator(features:list):\r\n> inputs = [f[\"seq2seq\"][\"input\"] for f in features]\r\n> batch = tokenizer.prepare_seq2seq_batch(src_texts=inputs, max_length=256, padding='max_length')\r\n> batch[\"labels\"] = batch[\"input_ids\"].copy()\r\n> for k in batch:\r\n> batch[k] = torch.tensor(batch[k])\r\n> return batch\r\n> \r\n> \r\n> batch_out = data_collator(eval_dataset)\r\n> print(batch_out)\r\n> print(batch_out['input_ids'].shape,batch_out['labels'].shape,batch_out['attention_mask'].shape)\r\n> \r\n> \r\n> # defining training related arguments\r\n> args = Seq2SeqTrainingArguments(output_dir=\"clm-checkpoints\",\r\n> do_train=True,\r\n> do_eval=True,\r\n> evaluation_strategy=\"epoch\",\r\n> per_device_train_batch_size=8,\r\n> per_device_eval_batch_size=8,\r\n> learning_rate=5e-5,\r\n> num_train_epochs=1,\r\n> logging_dir=\"./logs\")\r\n> \r\n> \r\n> # defining trainer using 🤗\r\n> trainer = Seq2SeqTrainer(model=model, \r\n> args=args, \r\n> data_collator=data_collator, \r\n> train_dataset=train_dataset, \r\n> eval_dataset=eval_dataset)\r\n> \r\n> \r\n> # ## Training time\r\n> trainer.train()\r\n> # It will take hours to train this model on this dataset\r\n> \r\n> \r\n> # lets save model\r\n> trainer.evaluate(eval_dataset=eval_dataset)\r\n> trainer.save_model(\"clm-checkpoints\")\r\n> ```\r\n\r\nThanks for the code example, I am also planning on implementing pretrained from scratch, and I've got several questions for the code\r\n- I noticed that you use pretrained bart tokenizer, how can I pretrain it for different language?\r\n- How much compute did you use for your implementation?", "> > I and my co-worker wrote a demo according to roberta pretraining demo.\r\n> > ```\r\n> > #encoding=utf-8\r\n> > \r\n> > from transformers import (\r\n> > BartForConditionalGeneration, BartTokenizer, BartForCausalLM,\r\n> > 
Seq2SeqTrainingArguments, Seq2SeqTrainer\r\n> > )\r\n> > \r\n> > import torch\r\n> > from torch.utils.data import random_split\r\n> > \r\n> > \r\n> > # ## Initiating model and trainer for training\r\n> > from transformers import BartModel, BartConfig\r\n> > from transformers import BartTokenizerFast\r\n> > \r\n> > \r\n> > configuration = BartConfig(\r\n> > vocab_size=52000,\r\n> > max_position_embeddings=258,\r\n> > d_model=256,\r\n> > encoder_layers=3,\r\n> > decoder_layers=3,\r\n> > encoder_attention_heads=4,\r\n> > decoder_attention_heads=4,\r\n> > decoder_ffn_dim=1024,\r\n> > encoder_ffn_dim=1024,\r\n> > )\r\n> > model = BartForCausalLM(configuration)\r\n> > tokenizer = BartTokenizerFast.from_pretrained(\"./dic\", max_len=256, additional_special_tokens=['[CH]', '[OTHER]', '[VAR]', '[NUM]'])\r\n> > \r\n> > \r\n> > # ### HTTP Request DataPreparing & Modeling\r\n> > data = []\r\n> > with open(\"../data/sample.txt\") as f1:\r\n> > for src in f1:\r\n> > data.append(\r\n> > {\r\n> > \"seq2seq\": {\r\n> > \"input\": src.strip()\r\n> > }\r\n> > }\r\n> > )\r\n> > print(f'total size of data is {len(data)}')\r\n> > \r\n> > \r\n> > # splitting dataset into train, validation\r\n> > split = 0.2\r\n> > train_dataset, eval_dataset = random_split(data, lengths=[int((1-split)*len(data))+1, int(split*len(data))])\r\n> > \r\n> > \r\n> > # defining collator functioon for preparing batches on the fly ..\r\n> > def data_collator(features:list):\r\n> > inputs = [f[\"seq2seq\"][\"input\"] for f in features]\r\n> > batch = tokenizer.prepare_seq2seq_batch(src_texts=inputs, max_length=256, padding='max_length')\r\n> > batch[\"labels\"] = batch[\"input_ids\"].copy()\r\n> > for k in batch:\r\n> > batch[k] = torch.tensor(batch[k])\r\n> > return batch\r\n> > \r\n> > \r\n> > batch_out = data_collator(eval_dataset)\r\n> > print(batch_out)\r\n> > print(batch_out['input_ids'].shape,batch_out['labels'].shape,batch_out['attention_mask'].shape)\r\n> > \r\n> > \r\n> > # defining training related arguments\r\n> > args = Seq2SeqTrainingArguments(output_dir=\"clm-checkpoints\",\r\n> > do_train=True,\r\n> > do_eval=True,\r\n> > evaluation_strategy=\"epoch\",\r\n> > per_device_train_batch_size=8,\r\n> > per_device_eval_batch_size=8,\r\n> > learning_rate=5e-5,\r\n> > num_train_epochs=1,\r\n> > logging_dir=\"./logs\")\r\n> > \r\n> > \r\n> > # defining trainer using 🤗\r\n> > trainer = Seq2SeqTrainer(model=model, \r\n> > args=args, \r\n> > data_collator=data_collator, \r\n> > train_dataset=train_dataset, \r\n> > eval_dataset=eval_dataset)\r\n> > \r\n> > \r\n> > # ## Training time\r\n> > trainer.train()\r\n> > # It will take hours to train this model on this dataset\r\n> > \r\n> > \r\n> > # lets save model\r\n> > trainer.evaluate(eval_dataset=eval_dataset)\r\n> > trainer.save_model(\"clm-checkpoints\")\r\n> > ```\r\n> \r\n> Thanks for the code example, I am also planning on implementing pretrained from scratch, and I've got several questions for the code\r\n> \r\n> * I noticed that you use pretrained bart tokenizer, how can I pretrain it for different language?\r\n> * How much compute did you use for your implementation?\r\n\r\n\r\n\r\nFor the first question, just like this:\r\n\r\n```\r\nfrom tokenizers import (ByteLevelBPETokenizer,SentencePieceBPETokenizer,BertWordPieceTokenizer)\r\n\r\ntokenizer = ByteLevelBPETokenizer()\r\npaths = ['./data/corpus.txt']\r\ntokenizer.train(files=paths, vocab_size = 15000, min_frequency=6, special_tokens = [\"[UNK]\", \"[CLS]\", \"[SEP]\", \"[PAD]\", 
\"[MASK]\"])\r\n\r\ntokenizer.save_model(\"./data/dic/\")\r\n```\r\n\r\nFor the other question, i trained it with 12G gpu memory,but it may be completed with samller gpu memory. And also you could adjust you parameters to your server environment.", "@myechona Thanks for your code. I have a question about it. There are some tasks like text-filling and sentence-permutation during pretrain stage, i want to know whether the \"input_ids\" is for masked sentence and the \"labels\" is for origin sentence?", "If anyone wants to train their MBART model then feel free to use this.\r\nhttps://github.com/prajdabre/yanmtt\r\n\r\nContributions are welcome!", "> I and my co-worker wrote a demo according to roberta pretraining demo.\r\n> \r\n> ```\r\n> #encoding=utf-8\r\n> \r\n> from transformers import (\r\n> BartForConditionalGeneration, BartTokenizer, BartForCausalLM,\r\n> Seq2SeqTrainingArguments, Seq2SeqTrainer\r\n> )\r\n> \r\n> import torch\r\n> from torch.utils.data import random_split\r\n> \r\n> \r\n> # ## Initiating model and trainer for training\r\n> from transformers import BartModel, BartConfig\r\n> from transformers import BartTokenizerFast\r\n> \r\n> \r\n> configuration = BartConfig(\r\n> vocab_size=52000,\r\n> max_position_embeddings=258,\r\n> d_model=256,\r\n> encoder_layers=3,\r\n> decoder_layers=3,\r\n> encoder_attention_heads=4,\r\n> decoder_attention_heads=4,\r\n> decoder_ffn_dim=1024,\r\n> encoder_ffn_dim=1024,\r\n> )\r\n> model = BartForCausalLM(configuration)\r\n> tokenizer = BartTokenizerFast.from_pretrained(\"./dic\", max_len=256, additional_special_tokens=['[CH]', '[OTHER]', '[VAR]', '[NUM]'])\r\n> \r\n> \r\n> # ### HTTP Request DataPreparing & Modeling\r\n> data = []\r\n> with open(\"../data/sample.txt\") as f1:\r\n> for src in f1:\r\n> data.append(\r\n> {\r\n> \"seq2seq\": {\r\n> \"input\": src.strip()\r\n> }\r\n> }\r\n> )\r\n> print(f'total size of data is {len(data)}')\r\n> \r\n> \r\n> # splitting dataset into train, validation\r\n> split = 0.2\r\n> train_dataset, eval_dataset = random_split(data, lengths=[int((1-split)*len(data))+1, int(split*len(data))])\r\n> \r\n> \r\n> # defining collator functioon for preparing batches on the fly ..\r\n> def data_collator(features:list):\r\n> inputs = [f[\"seq2seq\"][\"input\"] for f in features]\r\n> batch = tokenizer.prepare_seq2seq_batch(src_texts=inputs, max_length=256, padding='max_length')\r\n> batch[\"labels\"] = batch[\"input_ids\"].copy()\r\n> for k in batch:\r\n> batch[k] = torch.tensor(batch[k])\r\n> return batch\r\n> \r\n> \r\n> batch_out = data_collator(eval_dataset)\r\n> print(batch_out)\r\n> print(batch_out['input_ids'].shape,batch_out['labels'].shape,batch_out['attention_mask'].shape)\r\n> \r\n> \r\n> # defining training related arguments\r\n> args = Seq2SeqTrainingArguments(output_dir=\"clm-checkpoints\",\r\n> do_train=True,\r\n> do_eval=True,\r\n> evaluation_strategy=\"epoch\",\r\n> per_device_train_batch_size=8,\r\n> per_device_eval_batch_size=8,\r\n> learning_rate=5e-5,\r\n> num_train_epochs=1,\r\n> logging_dir=\"./logs\")\r\n> \r\n> \r\n> # defining trainer using 🤗\r\n> trainer = Seq2SeqTrainer(model=model, \r\n> args=args, \r\n> data_collator=data_collator, \r\n> train_dataset=train_dataset, \r\n> eval_dataset=eval_dataset)\r\n> \r\n> \r\n> # ## Training time\r\n> trainer.train()\r\n> # It will take hours to train this model on this dataset\r\n> \r\n> \r\n> # lets save model\r\n> trainer.evaluate(eval_dataset=eval_dataset)\r\n> trainer.save_model(\"clm-checkpoints\")\r\n> ```\r\n\r\nThanks for your code, it really 
helps.", "I'm most interested in sentence infilling, which this script doesn't really seem to address (though my understanding was that BART training generally involves masking and permutation). Is there an additional step I need to add for the infilling functionality?", "\r\n> We still need to provide a good docstring/notebook for this. It's on our ToDo-List. :-)\r\n> \r\n> Or @sshleifer - is there already something for Bart?\r\n\r\nHi, any update on this? @vanpelt \r\n", "I actually decided to jump over to T5 and use the `run_t5_mlm_flax.py` script. Seems to be working so far, though it's very new, so missing some conveniences.... it sounds like that stuff is underway!", "> I actually decided to jump over to T5 and use the `run_t5_mlm_flax.py` script. Seems to be working so far, though it's very new, so missing some conveniences.... it sounds like that stuff is underway!\r\n\r\n\r\nGreat, I was initially looking at those scripts to get some ideas about the pre-training script, but since then thought the Huggingface guys might have come up with a resource to do this. Apparently, it's still underway! :)\r\n\r\n", "We've released [nanoT5](https://github.com/PiotrNawrot/nanoT5) that reproduces T5-model (similar to BART) pre-training in PyTorch (not Flax). \r\n\r\nYou can take a look! \r\n\r\nAny suggestions are more than welcome." ]
1,588
1,678
1,602
NONE
null
How does one pre-train the BART model in an unsupervised manner? Any example?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4151/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4151/timeline
completed
null
null
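The demo in the thread above sets `labels` equal to the uncorrupted `input_ids`, which is what prompts the question about text infilling: for BART's denoising objective, `labels` should hold the original sentence and `input_ids` a noised copy. A minimal sketch of such a collator follows; plain random token masking stands in for the paper's span infilling and sentence permutation, and `mask_prob` is an arbitrary choice, so this is an illustration under stated assumptions rather than the official recipe:

```python
# Sketch of a BART-style denoising collator (token masking only; the paper's
# span infilling / sentence permutation steps are not reproduced here).
import torch
from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")

def denoising_collator(texts, mask_prob=0.3):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    labels = batch["input_ids"].clone()  # clean targets: the original sentences
    # Never corrupt special tokens (pad/bos/eos).
    special = torch.zeros_like(labels, dtype=torch.bool)
    for sid in tokenizer.all_special_ids:
        special |= labels == sid
    noise = (torch.rand(labels.shape) < mask_prob) & ~special
    batch["input_ids"] = labels.masked_fill(noise, tokenizer.mask_token_id)
    batch["labels"] = labels
    return batch

print(denoising_collator(["Hello world.", "BART is a denoising autoencoder."]))
```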
https://api.github.com/repos/huggingface/transformers/issues/4150
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4150/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4150/comments
https://api.github.com/repos/huggingface/transformers/issues/4150/events
https://github.com/huggingface/transformers/issues/4150
612,421,653
MDU6SXNzdWU2MTI0MjE2NTM=
4,150
Config File
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "organizations_url": "https://api.github.com/users/ghost/orgs", "repos_url": "https://api.github.com/users/ghost/repos", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "received_events_url": "https://api.github.com/users/ghost/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "You get the default config file as written on AWS with \r\n```python \r\nconfig = T5Config.from_pretrained(\"t5-small\")\r\n```\r\n\r\nIf you want to change anything on the config after you can update it with a dict of your specifications:\r\n```python \r\nconfig.update(your_dict)\r\n```", "Hello @patrickvonplaten !\r\nI have tried to update the config file using new dict \r\n```\r\nnew_dict={ \"d_ff\": 2048,\r\n \"d_kv\": 64,\r\n \"d_model\": 512,\r\n \"dropout_rate\": 0.1,\r\n \"eos_token_id\": 1,\r\n \"feed_forward_proj\": \"relu\",\r\n \"initializer_factor\": 1.0,\r\n \"is_encoder_decoder\": True,\r\n \"layer_norm_epsilon\": 1e-06,\r\n \"model_type\": \"t5\",\r\n \"num_decoder_layers\": 6,\r\n \"num_heads\": 8,\r\n \"num_layers\": 6,\r\n \"pad_token_id\": 0,\r\n \"relative_attention_num_buckets\": 32,\r\n \"transformers_version\": \"4.5.1\",\r\n \"use_cache\": False,\r\n \"vocab_size\": 32128,\r\n \"num_input_sent\": 4, # TODO: ADDED\r\n \"max_source_len \": 25, # TODO: ADDED\r\n \"max_target_len\": 25, # TODO: ADDED\r\n \r\n }\r\n```\r\nprint(onfig.d_ff) gives 2048\r\n\r\nbut: \r\nprint(max_source_len) gives me an error \"AttributeError: 'T5Config' object has no attribute 'max_source_len'\"\r\n\r\nhow to use the new added configurations after they are added?", "Hey @Arij-Aladel, \r\n\r\nWhat would be the purpose of adding parameters to the config, such as `max_target_len` that are not used in the corresponding model `T5Model`?", "@patrickvonplaten I am trying to build my model depending on t5 hugging face API and I need to build my own config. Whatever the purpose is , the question still what is th benefit of updating the parameters if I can not reuse them?", "Ok thanks I have solved it it was my bad, so sorry! every thing works well" ]
1,588
1,622
1,588
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> Hi everybody, is there a way to automatically obtain the configuration file for pre-trained TensorFlow T5 network? Specifically the config.json file which I want to use to convert a tf model into pytorch using the script provided by HuggingFace. Just to be more clear, I want something like that: https://s3.amazonaws.com/models.huggingface.co/bert/t5-small-config.json, with the specifications for my model. Thanks in advance :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4150/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4150/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4149
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4149/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4149/comments
https://api.github.com/repos/huggingface/transformers/issues/4149/events
https://github.com/huggingface/transformers/issues/4149
612,255,996
MDU6SXNzdWU2MTIyNTU5OTY=
4,149
Fine-tuning T5 in Tensorflow
{ "login": "tqdo", "id": 53948469, "node_id": "MDQ6VXNlcjUzOTQ4NDY5", "avatar_url": "https://avatars.githubusercontent.com/u/53948469?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tqdo", "html_url": "https://github.com/tqdo", "followers_url": "https://api.github.com/users/tqdo/followers", "following_url": "https://api.github.com/users/tqdo/following{/other_user}", "gists_url": "https://api.github.com/users/tqdo/gists{/gist_id}", "starred_url": "https://api.github.com/users/tqdo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tqdo/subscriptions", "organizations_url": "https://api.github.com/users/tqdo/orgs", "repos_url": "https://api.github.com/users/tqdo/repos", "events_url": "https://api.github.com/users/tqdo/events{/privacy}", "received_events_url": "https://api.github.com/users/tqdo/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hi @tqdo, \r\n\r\nGood question. We will add better explanation for TF T5 soon :-) ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@tqdo. I open sourced fine-tuning T5 by customizing training loop in Tensorflow2.0+: https://github.com/wangcongcong123/ttt\r\nHope this helps.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,588
1,605
1,605
NONE
null
I wonder whether there is an example/tutorial about how to fine-tune T5 for a particular task (say translation) in TensorFlow. On the doc website it says that `This model is a tf.keras.Model sub-class`. However, I am not really sure what the output is for the Keras model. It seems like the output (the target sequence, in the case of translation) is considered an input for the T5 model as well. Thank you in advance.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4149/reactions", "total_count": 3, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4149/timeline
completed
null
null
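As a tentative answer to the question above: yes, the target sequence is also fed to the model during training, as teacher-forced decoder input derived from `labels`, and the model then returns the LM loss. A minimal sketch, assuming a recent `transformers` release in which `TFT5ForConditionalGeneration` accepts `labels` (the translation pair is invented):

```python
# Sketch of one fine-tuning step for TF T5 (API details assume a recent release).
import tensorflow as tf
from transformers import T5Tokenizer, TFT5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = TFT5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: Hello!", return_tensors="tf")
labels = tokenizer("Hallo!", return_tensors="tf").input_ids

optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)
with tf.GradientTape() as tape:
    outputs = model(inputs.input_ids, labels=labels)  # decoder inputs built from labels
    loss = tf.reduce_mean(outputs.loss)               # outputs.loss may be per-token

grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
```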
https://api.github.com/repos/huggingface/transformers/issues/4148
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4148/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4148/comments
https://api.github.com/repos/huggingface/transformers/issues/4148/events
https://github.com/huggingface/transformers/issues/4148
612,231,472
MDU6SXNzdWU2MTIyMzE0NzI=
4,148
DistilBertForQuestionAnswering returns [UNK]
{ "login": "dimwael", "id": 32783348, "node_id": "MDQ6VXNlcjMyNzgzMzQ4", "avatar_url": "https://avatars.githubusercontent.com/u/32783348?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dimwael", "html_url": "https://github.com/dimwael", "followers_url": "https://api.github.com/users/dimwael/followers", "following_url": "https://api.github.com/users/dimwael/following{/other_user}", "gists_url": "https://api.github.com/users/dimwael/gists{/gist_id}", "starred_url": "https://api.github.com/users/dimwael/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dimwael/subscriptions", "organizations_url": "https://api.github.com/users/dimwael/orgs", "repos_url": "https://api.github.com/users/dimwael/repos", "events_url": "https://api.github.com/users/dimwael/events{/privacy}", "received_events_url": "https://api.github.com/users/dimwael/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Solved : Had to update transformers & torch\r\nthis code is working fine :\r\n```python\r\nfrom transformers import DistilBertTokenizer, DistilBertForQuestionAnswering\r\nimport torch\r\n\r\ntokenizer = DistilBertTokenizer.from_pretrained(\r\n 'distilbert-base-uncased-distilled-squad')\r\nmodel = DistilBertForQuestionAnswering.from_pretrained(\r\n 'distilbert-base-uncased-distilled-squad')\r\ninput_ids = torch.tensor(tokenizer.encode(\r\n question, corpus, max_length=256, add_special_tokens=True)).unsqueeze(0) # Batch size 1\r\nstart_positions = torch.tensor([1])\r\nend_positions = torch.tensor([3])\r\n\r\noutputs = model(input_ids, start_positions=start_positions,\r\n end_positions=end_positions)\r\nloss, start_scores, end_scores = outputs[:3]\r\n\r\nall_tokens = tokenizer.convert_ids_to_tokens(input_ids[0])\r\nanswer = tokenizer.convert_tokens_to_string(\r\n all_tokens[torch.argmax(start_scores): torch.argmax(end_scores) + 1])\r\nprint('*************')\r\nprint(answer)\r\n```" ]
1,588
1,588
1,588
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): DistilBert Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [X] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce DistilBert for question answering is returning a sentence of [UNK] Steps to reproduce the behavior: Code snippet I am using: ```python import torch from transformers import DistilBertTokenizer, DistilBertForQuestionAnswering tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased-distilled-squad') model = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased-distilled-squad') torch_device = "cuda" if torch.cuda.is_available() else "cpu" def answer(question, text): input_dict = tokenizer.encode_plus( question, text, return_tensors='pt', max_length=512) input_ids = input_dict["input_ids"].to('cpu') start_scores, end_scores = model(input_ids) start = torch.argmax(start_scores) end = torch.argmax(end_scores) all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0]) answer = ''.join(all_tokens[start: end + 1]).replace('▁', ' ').strip() answer = answer.replace('[SEP]', '') return answer if answer != '[CLS]' and len(answer) != 0 else 'could not find an answer' ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Using Albert gives good results, but the DistilBert tokenizer is not working. <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.2.1 - Platform: Linux 18.04 - Python version: 3.6.9 - PyTorch version (GPU?): pytorch-pretrained-bert==0.6.2 - Tensorflow version (GPU?): No - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4148/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4148/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4147
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4147/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4147/comments
https://api.github.com/repos/huggingface/transformers/issues/4147/events
https://github.com/huggingface/transformers/issues/4147
612,074,903
MDU6SXNzdWU2MTIwNzQ5MDM=
4,147
Error in Calculating Sentence Perplexity for GPT-2 model
{ "login": "states786", "id": 64096105, "node_id": "MDQ6VXNlcjY0MDk2MTA1", "avatar_url": "https://avatars.githubusercontent.com/u/64096105?v=4", "gravatar_id": "", "url": "https://api.github.com/users/states786", "html_url": "https://github.com/states786", "followers_url": "https://api.github.com/users/states786/followers", "following_url": "https://api.github.com/users/states786/following{/other_user}", "gists_url": "https://api.github.com/users/states786/gists{/gist_id}", "starred_url": "https://api.github.com/users/states786/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/states786/subscriptions", "organizations_url": "https://api.github.com/users/states786/orgs", "repos_url": "https://api.github.com/users/states786/repos", "events_url": "https://api.github.com/users/states786/events{/privacy}", "received_events_url": "https://api.github.com/users/states786/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "The longest input length a pretrained GPT2 model can treat depends on its `n_position` value. You can look it up here e.g. https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json . \r\nIf you use a pretrained-model you sadly can only treat sequences <= 1024.\r\n\r\nFor you own model you can increase `n_position` and retrain the longer position encoding matrix this way. \r\n\r\nIf you are just interested in the perplexity you could also simply cut the input_ids into smaller input_ids and average the loss over them. It will not exactly be the same, but a good approximation.", "Is it being calculated in the same way for the evaluation of training on validation set?\r\n\r\nSecondly, if we calculate perplexity of all the individual sentences from corpus \"xyz\" and take average perplexity of these sentences? will it be the same by calculating the perplexity of the whole corpus by using parameter \"eval_data_file\" in language model script?", "1) Yes \r\n\r\n2) No -> since you don't take into account the probability `p(first_token_sentence_2 | last_token_sentence_1)`, but it will be a very good approximation.\r\n\r\nHope this answers your question" ]
1,588
1,591
1,591
NONE
null
Hi, I am using the following code to calculate the perplexity of sentences on my GPT-2 pretrained model: ``` tokenizer = GPT2Tokenizer.from_pretrained('gpt-model') config = GPT2Config.from_pretrained('gpt-model') model = GPT2LMHeadModel.from_pretrained('gpt-model', config=config) model.eval() def calculatePerplexity(sentence,model,tokenizer): input_ids = torch.tensor(tokenizer.encode(sentence)).unsqueeze(0) input_ids = input_ids.to('cpu') with torch.no_grad(): outputs = model(input_ids, labels=input_ids) loss, logits = outputs[:2] return math.exp(loss) ``` For some of the sentences from my testing corpus, I am getting the following error: **Token indices sequence length is longer than the specified maximum sequence length for this model (1140 > 1024). Running this sequence through the model will result in indexing errors** How can I resolve this error? Kindly advise.
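A minimal sketch of the chunked approximation suggested in the replies: split `input_ids` into model-sized pieces and take a token-weighted average of the loss. The helper name is illustrative, and cross-chunk probabilities are ignored, so this is only an approximation:

```python
import math
import torch

def calculate_perplexity_chunked(sentence, model, tokenizer, max_len=1024):
    ids = tokenizer.encode(sentence)
    total_loss, total_tokens = 0.0, 0
    for i in range(0, len(ids), max_len):
        chunk = torch.tensor(ids[i: i + max_len]).unsqueeze(0)
        if chunk.size(1) < 2:  # an LM loss needs at least two tokens
            continue
        with torch.no_grad():
            loss = model(chunk, labels=chunk)[0]  # mean cross-entropy over the chunk
        n = chunk.size(1) - 1  # number of tokens actually predicted
        total_loss += loss.item() * n
        total_tokens += n
    return math.exp(total_loss / max(total_tokens, 1))
```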
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4147/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4147/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4146
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4146/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4146/comments
https://api.github.com/repos/huggingface/transformers/issues/4146/events
https://github.com/huggingface/transformers/pull/4146
611,968,909
MDExOlB1bGxSZXF1ZXN0NDEzMDI1OTYw
4,146
Tpu trainer
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4146?src=pr&el=h1) Report\n> Merging [#4146](https://codecov.io/gh/huggingface/transformers/pull/4146?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d713cfc5ebfb1ed83de1fce55dd7279f9db30672&el=desc) will **decrease** coverage by `0.08%`.\n> The diff coverage is `33.33%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4146/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4146?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4146 +/- ##\n==========================================\n- Coverage 78.84% 78.76% -0.09% \n==========================================\n Files 114 114 \n Lines 18688 18729 +41 \n==========================================\n+ Hits 14735 14751 +16 \n- Misses 3953 3978 +25 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4146?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `42.27% <25.58%> (-1.64%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/4146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `84.21% <63.63%> (-3.49%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4146?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4146?src=pr&el=footer). Last update [d713cfc...e882471](https://codecov.io/gh/huggingface/transformers/pull/4146?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Awesome!" ]
1,588
1,588
1,588
MEMBER
null
This PR aims to bring TPU support to the trainer. It runs on GLUE/MRPC but is yet untested on others. Left to do: - [ ] Saving and reloading mid-training - [ ] Check it runs on a few examples (Language modeling, NER, other GLUE tasks) - [ ] Write down training speed-ups - [ ] Write down evaluation speed-ups The API is not final.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4146/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 3, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4146/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4146", "html_url": "https://github.com/huggingface/transformers/pull/4146", "diff_url": "https://github.com/huggingface/transformers/pull/4146.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4146.patch", "merged_at": 1588862045000 }
https://api.github.com/repos/huggingface/transformers/issues/4145
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4145/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4145/comments
https://api.github.com/repos/huggingface/transformers/issues/4145/events
https://github.com/huggingface/transformers/issues/4145
611,697,237
MDU6SXNzdWU2MTE2OTcyMzc=
4,145
tokenizer.batch_encode_plus does not return input_len
{ "login": "max-yue", "id": 13486398, "node_id": "MDQ6VXNlcjEzNDg2Mzk4", "avatar_url": "https://avatars.githubusercontent.com/u/13486398?v=4", "gravatar_id": "", "url": "https://api.github.com/users/max-yue", "html_url": "https://github.com/max-yue", "followers_url": "https://api.github.com/users/max-yue/followers", "following_url": "https://api.github.com/users/max-yue/following{/other_user}", "gists_url": "https://api.github.com/users/max-yue/gists{/gist_id}", "starred_url": "https://api.github.com/users/max-yue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/max-yue/subscriptions", "organizations_url": "https://api.github.com/users/max-yue/orgs", "repos_url": "https://api.github.com/users/max-yue/repos", "events_url": "https://api.github.com/users/max-yue/events{/privacy}", "received_events_url": "https://api.github.com/users/max-yue/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,588
1,594
1,594
CONTRIBUTOR
null
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ``` batch_tokens = self.tokenizer.batch_encode_plus( [example.text_a for example in examples], max_length=max_seq_len, pad_to_max_length=True, ) for ex_id, (input_ids, segment_ids, input_mask) in enumerate(zip(batch_tokens["input_ids"], batch_tokens["token_type_ids"], batch_tokens["attention_mask"])): input_len = sum(1 for id in input_ids if id != self.tokenizer.pad_token_id) ``` If batch_encode_plus also returned input_len, that would be great. ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
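For what it's worth, the requested length can already be recovered from the attention mask that `batch_encode_plus` returns; a minimal sketch reusing the names from the snippet above:

```python
batch_tokens = self.tokenizer.batch_encode_plus(
    [example.text_a for example in examples],
    max_length=max_seq_len,
    pad_to_max_length=True,
)
# attention_mask is 1 for real tokens and 0 for padding, so its row sum
# is exactly the unpadded input length
input_lens = [sum(mask) for mask in batch_tokens["attention_mask"]]
```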
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4145/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4145/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4144
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4144/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4144/comments
https://api.github.com/repos/huggingface/transformers/issues/4144/events
https://github.com/huggingface/transformers/issues/4144
611,695,328
MDU6SXNzdWU2MTE2OTUzMjg=
4,144
Use finetuned-BART large to do conditional generation
{ "login": "tuhinjubcse", "id": 3104771, "node_id": "MDQ6VXNlcjMxMDQ3NzE=", "avatar_url": "https://avatars.githubusercontent.com/u/3104771?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tuhinjubcse", "html_url": "https://github.com/tuhinjubcse", "followers_url": "https://api.github.com/users/tuhinjubcse/followers", "following_url": "https://api.github.com/users/tuhinjubcse/following{/other_user}", "gists_url": "https://api.github.com/users/tuhinjubcse/gists{/gist_id}", "starred_url": "https://api.github.com/users/tuhinjubcse/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tuhinjubcse/subscriptions", "organizations_url": "https://api.github.com/users/tuhinjubcse/orgs", "repos_url": "https://api.github.com/users/tuhinjubcse/repos", "events_url": "https://api.github.com/users/tuhinjubcse/events{/privacy}", "received_events_url": "https://api.github.com/users/tuhinjubcse/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "If you used pytorch-lightning for training then you can load the weights from checkpoint as follows\r\n```\r\nckpt = torch.load('./bart_sum/checkpointepoch=2.ckpt')\r\nmodel.load_state_dict(ckpt['state_dict'])\r\n```\r\n\r\nonce you load the weights this way then save the model using the `.save_pretrained` method so that next time you can load it using `.from_pretrained`", "@patil-suraj, I tried your suggestion on finetuned BART checkpoint; though this gives me the following error, P.S. Model and tokenizer used is \"bart-large\"\r\n`---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-10-d2e409f72e4e> in <module>()\r\n 1 ckpt = torch.load('./OUTPUT_DIR/checkpointcheckpoint_ckpt_epoch_2.ckpt')\r\n----> 2 model.load_state_dict(ckpt['state_dict'])\r\n\r\n/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)\r\n 845 if len(error_msgs) > 0:\r\n 846 raise RuntimeError('Error(s) in loading state_dict for {}:\\n\\t{}'.format(\r\n--> 847 self.__class__.__name__, \"\\n\\t\".join(error_msgs)))\r\n 848 return _IncompatibleKeys(missing_keys, unexpected_keys)\r\n 849 \r\n\r\nRuntimeError: Error(s) in loading state_dict for BartForConditionalGeneration:\r\n\tMissing key(s) in state_dict: \"final_logits_bias\", \"model.shared.weight\", \"model.encoder.embed_tokens.weight\", \"model.encoder.embed_positions.weight\", \"model.encoder.layers.0.self_attn.k_proj.weight\", \"model.encoder.layers.0.self_attn.k_proj.bias\", \"model.encoder.layers.0.self_attn.v_proj.weight\", \"model.encoder.layers.0.self_attn.v_proj.bias\", \"model.encoder.layers.0.self_attn.q_proj.weight\", \"model.encoder.layers.0.self_attn.q_proj.bias\", \"model.encoder.layers.0.self_attn.out_proj.weight\", \"model.encoder.layers.0.self_attn.out_proj.bias\", \"model.encoder.layers.0.self_attn_layer_norm.weight\", \"model.encoder.layers.0.self_attn_layer_norm.bias\", \"model.encoder.layers.0.fc1.weight\", \"model.encoder.layers.0.fc1.bias\", \"model.encoder.layers.0.fc2.weight\", \"model.encoder.layers.0.fc2.bias\", \"model.encoder.layers.0.final_layer_norm.weight\", \"model.encoder.layers.0.final_layer_norm.bias\", \"model.encoder.layers.1.self_attn.k_proj.weight\", \"model.encoder.layers.1.self_attn.k_proj.bias\", \"model.encoder.layers.1.self_attn.v_proj.weight\", \"model.encoder.layers.1.self_attn.v_proj.bias\", \"model.encoder.layers.1.self_attn.q_proj.weight\", \"model.encoder.layers.1.self_attn.q_proj.bias\", \"model.encoder.layers.1.self_attn.out_proj.weight\", \"model.encoder.layers.1.self_attn.out_proj.bias\", \"model.encoder.layers.1.self_attn_layer_norm.weight\", \"model.encoder.layers.1.self_attn_layer_norm.bias\", \"model.encoder.layers.1.fc1.weight\", \"model.encoder.layers.1.fc1.bias\", \"model.encoder.layers.1.fc2...\r\n\tUnexpected key(s) in state_dict: \"model.final_logits_bias\", \"model.model.shared.weight\", \"model.model.encoder.embed_tokens.weight\", \"model.model.encoder.embed_positions.weight\", \"model.model.encoder.layers.0.self_attn.k_proj.weight\", \"model.model.encoder.layers.0.self_attn.k_proj.bias\", \"model.model.encoder.layers.0.self_attn.v_proj.weight\", \"model.model.encoder.layers.0.self_attn.v_proj.bias\", \"model.model.encoder.layers.0.self_attn.q_proj.weight\", \"model.model.encoder.layers.0.self_attn.q_proj.bias\", \"model.model.encoder.layers.0.self_attn.out_proj.weight\", \"model.model.encoder.layers.0.self_attn.out_proj.bias\", 
\"model.model.encoder.layers.0.self_attn_layer_norm.weight\", \"model.model.encoder.layers.0.self_attn_layer_norm.bias\", \"model.model.encoder.layers.0.fc1.weight\", \"model.model.encoder.layers.0.fc1.bias\", \"model.model.encoder.layers.0.fc2.weight\", \"model.model.encoder.layers.0.fc2.bias\", \"model.model.encoder.layers.0.final_layer_norm.weight\", \"model.model.encoder.layers.0.final_layer_norm.bias\", \"model.model.encoder.layers.1.self_attn.k_proj.weight\", \"model.model.encoder.layers.1.self_attn.k_proj.bias\", \"model.model.encoder.layers.1.self_attn.v_proj.weight\", \"model.model.encoder.layers.1.self_attn.v_proj.bias\", \"model.model.encoder.layers.1.self_attn.q_proj.weight\", \"model.model.encoder.layers.1.self_attn.q_proj.bias\", \"model.model.encoder.layers.1.self_attn.out_proj.weight\", \"model.model.encoder.layers.1.self_attn.out_proj.bias\", \"model.model.encoder.layers.1.self... `\r\n\r\n\r\nPlease let me know how to tackle this?", "@pranavpawar3 \r\nhere `model` should be an instance of the `LighteningModule`. Initialize the `LighteningModule`, then you'll be able to do it this way\r\n\r\n```\r\nckpt = torch.load('./OUTPUT_DIR/checkpointcheckpoint_ckpt_epoch_2.ckpt')\r\nmodel.load_state_dict(ckpt['state_dict'])\r\n\r\n# save the inner pretrained model\r\nmodel.model.save_pretrained('model_dir')\r\n\r\n# then you can load it using BartForConditionalGeneration\r\nBartForConditionalGeneration.from_pretrained('model_dir')\r\n```", "@patil-suraj Initiating model as LighteningModule instance worked, Thanks!!", "@pranavpawar3 can I ask you to share how you initialized the LightningModule instance to make it compatible with the model you fine-tuned based on the pretrained bart-large model? I'm having the same issue. thanks!", "@patil-suraj could you please show how to Initialize model as LighteningModule instance. Have the same problem with loading finetuned bart ckpt. Thanks in advance!", "@sshleifer thanks for the link, meanwhile i managed to do what i wanted.\r\nanyway will be glad to see further improvements for summarisation tasks.\r\n\r\nfor those who finetuned BART model with finetune_bart.sh and wants to load it in pytorch, the next thing worked for me.\r\n\r\n```\r\nclass BartModel(pl.LightningModule):\r\n def __init__(self):\r\n super().__init__()\r\n self.model = BartForConditionalGeneration.from_pretrained('facebook/bart-large') \r\n \r\n def forward(self):\r\n pass\r\n \r\nckpt = torch.load('./bart_sum/checkpointepoch=1.ckpt')\r\n \r\nbart_model = BartModel()\r\nbart_model.load_state_dict(ckpt['state_dict'])\r\nbart_model.model.save_pretrained(\"working_dir\")\r\n``` \r\n", "Just merged a bunch of changes to the summarization finetuning code. Long description [here] https://github.com/huggingface/transformers/pull/4951. \r\nWould love it if somebody could take the new README/code for a spin!\r\n\r\nSome improvements (sorry to repeat myself):\r\n- you can finetune bart a lot faster with `--freeze_encoder` and `--freeze_embeds`.\r\n- you can collaborate with the community on hyperparams/modifications for the XSUM task using `--logger wandb_shared`\r\n- upgrade to pytorch_lightning==0.7.6\r\n- You get a huggingface style checkpoint associated with the `.ckpt` checkpoint using the new rouge2 based model checkpoint.\r\n- Rouge (the canonical summarization metric) is calculated at every val step, this is slow. So you can use `--val_check_interval 0.1 --n_val 500` to compute rouge more frequently on a subset of the validation set. 
\r\n\r\nIt's probably not perfect at the moment, so I'd love to know if anything is confusing or broken, either here or in a new issue :)\r\nThanks!\r\n\r\n\r\n\r\n\r\n\r\n", "@sshleifer hi, I checked the changes.\r\n\r\nIt works well, thanks for the automatic model saving to pytorch format.\r\n\r\nAlso good to see tips in the readme to use bart-large-xsum for short summaries; I tried it with my dataset instead of bart-large and this improved my score!\r\n\r\nAre there any tips for t5? What type of t5 model is better for short summaries?", "> @sshleifer thanks for the link, meanwhile i managed to do what i wanted. anyway will be glad to see further improvements for summarisation tasks.\r\n> \r\n> for those who finetuned BART model with finetune_bart.sh and wants to load it in pytorch, the next thing worked for me.\r\n> \r\n> ```\r\n> class BartModel(pl.LightningModule):\r\n> def __init__(self):\r\n> super().__init__()\r\n> self.model = BartForConditionalGeneration.from_pretrained('facebook/bart-large') \r\n> \r\n> def forward(self):\r\n> pass\r\n> \r\n> ckpt = torch.load('./bart_sum/checkpointepoch=1.ckpt')\r\n> \r\n> bart_model = BartModel()\r\n> bart_model.load_state_dict(ckpt['state_dict'])\r\n> bart_model.model.save_pretrained(\"working_dir\")\r\n> ```\r\n\r\nThx :D. Answer was helpful" ]
1,588
1,639
1,592
NONE
null
Hi, I am using a slightly old tag of your repo where BART had run_bart_sum.py. I finetuned bart-large on a custom dataset and want to do conditional generation ``` from transformers import BartTokenizer, BartForConditionalGeneration import torch model = BartForConditionalGeneration.from_pretrained('bart-large') tokenizer = BartTokenizer.from_pretrained('bart-large') ARTICLE_TO_SUMMARIZE = "President Donald Trump's senior adviser and son-in-law, Jared Kushner, praised the administration's response to the coronavirus pandemic as a \"great success story\" on Wednesday -- less than a day after the number of confirmed coronavirus cases in the United States topped 1 million. Kushner painted a rosy picture for \"Fox and Friends\" Wednesday morning, saying that \"the federal government rose to the challenge and this is a great success story and I think that that's really what needs to be told.\"" # model = BartForConditionalGeneration.from_pretrained('./bart_sum/checkpointepoch=2.ckpt') # tokenizer = BartTokenizer.from_pretrained('./bart_sum/checkpointepoch=2.ckpt') model = BartForConditionalGeneration.from_pretrained('bart-large') tokenizer = BartTokenizer.from_pretrained('bart-large') state = torch.load('./bart_sum/checkpointepoch=2.ckpt',map_location='cpu') model.load_state_dict(state) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) model.eval() inputs = tokenizer.batch_encode_plus( [ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt') summary_ids = model.generate( inputs['input_ids'], num_beams=1, max_length=512, early_stopping=True) print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids]) ``` I tried both loading the finetuned checkpoint directly and loading bart-large and setting the state dict. For the former it gives me ``` Traceback (most recent call last): File "generate.py", line 10, in <module> model = BartForConditionalGeneration.from_pretrained('./bart_sum/checkpointepoch=2.ckpt') File "/datastor/Softwarez/miniconda3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 438, in from_pretrained **kwargs, File "/datastor/Softwarez/miniconda3/lib/python3.7/site-packages/transformers/configuration_utils.py", line 200, in from_pretrained config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/datastor/Softwarez/miniconda3/lib/python3.7/site-packages/transformers/configuration_utils.py", line 252, in get_config_dict config_dict = cls._dict_from_json_file(resolved_config_file) File "/datastor/Softwarez/miniconda3/lib/python3.7/site-packages/transformers/configuration_utils.py", line 344, in _dict_from_json_file text = reader.read() File "/datastor/Softwarez/miniconda3/lib/python3.7/codecs.py", line 322, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte ``` For the latter ` Unexpected key(s) in state_dict: "epoch", "global_step", "checkpoint_callback_best", "optimizer_states", "lr_schedulers", "state_dict", "hparams", "hparams_type". `
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4144/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4144/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4143
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4143/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4143/comments
https://api.github.com/repos/huggingface/transformers/issues/4143/events
https://github.com/huggingface/transformers/pull/4143
611,687,198
MDExOlB1bGxSZXF1ZXN0NDEyODAxODM5
4,143
Camembert-large-fquad model card
{ "login": "mdhoffschmidt", "id": 6451662, "node_id": "MDQ6VXNlcjY0NTE2NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/6451662?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mdhoffschmidt", "html_url": "https://github.com/mdhoffschmidt", "followers_url": "https://api.github.com/users/mdhoffschmidt/followers", "following_url": "https://api.github.com/users/mdhoffschmidt/following{/other_user}", "gists_url": "https://api.github.com/users/mdhoffschmidt/gists{/gist_id}", "starred_url": "https://api.github.com/users/mdhoffschmidt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mdhoffschmidt/subscriptions", "organizations_url": "https://api.github.com/users/mdhoffschmidt/orgs", "repos_url": "https://api.github.com/users/mdhoffschmidt/repos", "events_url": "https://api.github.com/users/mdhoffschmidt/events{/privacy}", "received_events_url": "https://api.github.com/users/mdhoffschmidt/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,588
1,588
1,588
CONTRIBUTOR
null
Model card describing the camembert-large-fquad model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4143/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4143/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4143", "html_url": "https://github.com/huggingface/transformers/pull/4143", "diff_url": "https://github.com/huggingface/transformers/pull/4143.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4143.patch", "merged_at": 1588776068000 }
https://api.github.com/repos/huggingface/transformers/issues/4142
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4142/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4142/comments
https://api.github.com/repos/huggingface/transformers/issues/4142/events
https://github.com/huggingface/transformers/issues/4142
611,674,051
MDU6SXNzdWU2MTE2NzQwNTE=
4,142
New model addition: Blender (Facebook chatbot)
{ "login": "trianxy", "id": 16118129, "node_id": "MDQ6VXNlcjE2MTE4MTI5", "avatar_url": "https://avatars.githubusercontent.com/u/16118129?v=4", "gravatar_id": "", "url": "https://api.github.com/users/trianxy", "html_url": "https://github.com/trianxy", "followers_url": "https://api.github.com/users/trianxy/followers", "following_url": "https://api.github.com/users/trianxy/following{/other_user}", "gists_url": "https://api.github.com/users/trianxy/gists{/gist_id}", "starred_url": "https://api.github.com/users/trianxy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/trianxy/subscriptions", "organizations_url": "https://api.github.com/users/trianxy/orgs", "repos_url": "https://api.github.com/users/trianxy/repos", "events_url": "https://api.github.com/users/trianxy/events{/privacy}", "received_events_url": "https://api.github.com/users/trianxy/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "@tx1985 awesome idea", "Very Cool !", "+1", "Hey, \r\nI want to ask a question regarding Facebook Blender Chatbot. I want to train my custom data into blender. how can i do that? Please help", "Sorry, I know how to use blender as is but not how to fine tune it. I\nthink that you would require a significant amount of compute to do that !\n\nOn Fri, 5 Jun 2020 at 12:09, Nitesh Kumar <[email protected]> wrote:\n\n> Hey,\n> I want to ask a question regarding Facebook Blender Chatbot. I want to\n> train my custom data into blender. how can i do that? Please help\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/4142#issuecomment-639415197>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/APESH7GRICJLNEEJIGRPN5DRVDG73ANCNFSM4MYRA2EQ>\n> .\n>\n", "Very cool!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,588
1,598
1,598
NONE
null
# 🌟 New model addition ## Model description - Dialogue response generation model for multiturn conversations, open-sourced by Facebook (28 April 2020) - 3 models: 90m, 2.7b and 9.4b parameters - New state-of-the-art "in terms of engagingness and humanness measurements" - Blog post: https://parl.ai/projects/blender/ - Paper: https://arxiv.org/abs/2004.13637 - Code: Included in Facebook dialogue framework ParlAI https://github.com/facebookresearch/ParlAI ## Open source status * [ ] the model implementation is available: - possibly yes, together with the weights * [x] the model weights are available: - Need to install ParlAI (https://github.com/facebookresearch/ParlAI). Model weights are then downloaded when running `python parlai/scripts/safe_interactive.py -t blended_skill_talk -mf zoo:blender/blender_3B/model`. Compare https://github.com/facebookresearch/ParlAI/tree/master/parlai/zoo/blender * [x] who are the authors: (mention them, if possible by @gh-username) - Facebook AI Research: - Stephen Roller, Emily Dinan, Jason Weston - and Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau,
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4142/reactions", "total_count": 33, "+1": 24, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 9 }
https://api.github.com/repos/huggingface/transformers/issues/4142/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4141
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4141/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4141/comments
https://api.github.com/repos/huggingface/transformers/issues/4141/events
https://github.com/huggingface/transformers/issues/4141
611,666,573
MDU6SXNzdWU2MTE2NjY1NzM=
4,141
🚀 Advice about changing the variable name from "attention_mask" to "adder"
{ "login": "ianliuy", "id": 37043548, "node_id": "MDQ6VXNlcjM3MDQzNTQ4", "avatar_url": "https://avatars.githubusercontent.com/u/37043548?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ianliuy", "html_url": "https://github.com/ianliuy", "followers_url": "https://api.github.com/users/ianliuy/followers", "following_url": "https://api.github.com/users/ianliuy/following{/other_user}", "gists_url": "https://api.github.com/users/ianliuy/gists{/gist_id}", "starred_url": "https://api.github.com/users/ianliuy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ianliuy/subscriptions", "organizations_url": "https://api.github.com/users/ianliuy/orgs", "repos_url": "https://api.github.com/users/ianliuy/repos", "events_url": "https://api.github.com/users/ianliuy/events{/privacy}", "received_events_url": "https://api.github.com/users/ianliuy/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "I agree! Do you want to open a PR about this to change the naming? :-) \r\nWhen doing this we just have to be careful to not change the user facing api when doing this -> which means that ideally, we should not rename any function arguments of high level modules like `BertModel.forward()`.", "I've created PR #4566 for this issue", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,588
1,595
1,595
NONE
null
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ## Motivation I noticed that some users are pretty confused when reading the source code around the variable `attention_mask`, for example: [What is the meaning of Attention Mask #205](https://github.com/huggingface/transformers/issues/205) [Clarifying attention mask #542](https://github.com/huggingface/transformers/issues/542) I referred to the original BERT repository - [google-research/bert](https://github.com/google-research/bert). Compared to the original, I find that this repo sometimes mixes the concepts of `attention_mask` and `adder`. Referring to the original BERT: [./modeling.py#L707](https://github.com/google-research/bert/blob/master/modeling.py#L707) ```python attention_mask = tf.expand_dims(attention_mask, axis=[1]) adder = (1.0 - tf.cast(attention_mask, tf.float32)) * -10000.0 attention_scores += adder ``` But in this repo, take [src/transformers/modeling_tf_openai.py#L282](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_openai.py#L282) as an example: ```python attention_mask = attention_mask[:, tf.newaxis, tf.newaxis, :] attention_mask = tf.cast(attention_mask, tf.float32) attention_mask = (1.0 - attention_mask) * -10000.0 ``` and inside the method `TFAttention._attn()` [src/transformers/modeling_tf_openai.py#L112](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_openai.py#L112): ```python if attention_mask is not None: # Apply the attention mask w = w + attention_mask ``` <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Your contribution Maybe changing its name would be much clearer, like: ```python attention_mask = attention_mask[:, tf.newaxis, tf.newaxis, :] attention_mask = tf.cast(attention_mask, tf.float32) adder = (1.0 - attention_mask) * -10000.0 ``` and then: ```python if adder is not None: # Apply the attention mask attention_score = w + adder ``` <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4141/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4141/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4140
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4140/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4140/comments
https://api.github.com/repos/huggingface/transformers/issues/4140/events
https://github.com/huggingface/transformers/issues/4140
611,636,973
MDU6SXNzdWU2MTE2MzY5NzM=
4,140
How to make run_language_modeling.py work for transformer-xl?
{ "login": "bajajahsaas", "id": 11806556, "node_id": "MDQ6VXNlcjExODA2NTU2", "avatar_url": "https://avatars.githubusercontent.com/u/11806556?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bajajahsaas", "html_url": "https://github.com/bajajahsaas", "followers_url": "https://api.github.com/users/bajajahsaas/followers", "following_url": "https://api.github.com/users/bajajahsaas/following{/other_user}", "gists_url": "https://api.github.com/users/bajajahsaas/gists{/gist_id}", "starred_url": "https://api.github.com/users/bajajahsaas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bajajahsaas/subscriptions", "organizations_url": "https://api.github.com/users/bajajahsaas/orgs", "repos_url": "https://api.github.com/users/bajajahsaas/repos", "events_url": "https://api.github.com/users/bajajahsaas/events{/privacy}", "received_events_url": "https://api.github.com/users/bajajahsaas/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Yeah we don't have a run_language_modeling.py script yet. It's on a TODO list :-). Also see: #2008 .\r\nIf you feel like writing one, it would be great :-) " ]
1,588
1,591
1,591
NONE
null
The code in [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) does language modeling for any pre-trained model. However, for transformer-xl we need additional arguments to the forward function, like `mems`, and [run_transfo_xl.py](https://github.com/huggingface/transformers/blob/master/examples/contrib/run_transfo_xl.py) even performs a `model.reset_length()` operation using `mem_len` and `tgt_len` as arguments. I am not sure how these operations are handled in this generic code for running LM training with any pre-trained model. Any suggestions will be helpful.
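A minimal sketch of what a generic script would have to add for Transformer-XL: thread `mems` through successive forward calls. The output ordering is assumed from the transformers 2.x docs, `(losses, prediction_scores, mems, ...)`, so verify it against your version:

```python
from transformers import TransfoXLLMHeadModel

model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")
# model.reset_length(tgt_len, ext_len, mem_len) adjusts segment/memory sizes (2.x API)

mems = None
for input_ids in batches:  # `batches` is assumed: an iterable of (1, tgt_len) id tensors
    outputs = model(input_ids, labels=input_ids, mems=mems)
    loss, mems = outputs[0], outputs[2]  # losses come back per token, mems are detached
    loss.mean().backward()
    # optimizer step / zero_grad omitted in this sketch
```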
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4140/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4140/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4139
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4139/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4139/comments
https://api.github.com/repos/huggingface/transformers/issues/4139/events
https://github.com/huggingface/transformers/issues/4139
611,635,474
MDU6SXNzdWU2MTE2MzU0NzQ=
4,139
Cannot set max_position_embeddings to any desired value in T5Config
{ "login": "Akashdesarda", "id": 26931751, "node_id": "MDQ6VXNlcjI2OTMxNzUx", "avatar_url": "https://avatars.githubusercontent.com/u/26931751?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Akashdesarda", "html_url": "https://github.com/Akashdesarda", "followers_url": "https://api.github.com/users/Akashdesarda/followers", "following_url": "https://api.github.com/users/Akashdesarda/following{/other_user}", "gists_url": "https://api.github.com/users/Akashdesarda/gists{/gist_id}", "starred_url": "https://api.github.com/users/Akashdesarda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Akashdesarda/subscriptions", "organizations_url": "https://api.github.com/users/Akashdesarda/orgs", "repos_url": "https://api.github.com/users/Akashdesarda/repos", "events_url": "https://api.github.com/users/Akashdesarda/events{/privacy}", "received_events_url": "https://api.github.com/users/Akashdesarda/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "IMO the docstring is confusing here. This PR should fix it: #4422. \r\n& You are right you should change `max_position_embeddings` via `n_positions`." ]
1,588
1,589
1,589
NONE
null
# 🐛 Bug ## Information The model I am using (T5): Language I am using the model on (English): The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) ``` config = T5Config(max_position_embeddings=1024) ``` ## To reproduce Steps to reproduce the behaviour: 1. Instantiate T5Config 2. Set max_position_embeddings to any desired value like 1024 or 2048 (as mentioned in the docs) This works perfectly fine with BertConfig. This is the error: <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ``` Can't set max_position_embeddings with value 1024 for T5Config { "_num_labels": 2, "architectures": null, "bad_words_ids": null, "bos_token_id": null, "decoder_start_token_id": null, "do_sample": false, "early_stopping": false, "eos_token_id": 1, "finetuning_task": null, "id2label": { "0": "LABEL_0", "1": "LABEL_1" }, "is_decoder": false, "is_encoder_decoder": true, "label2id": { "LABEL_0": 0, "LABEL_1": 1 }, "length_penalty": 1.0, "max_length": 20, "min_length": 0, "model_type": "t5", "no_repeat_ngram_size": 0, "num_beams": 1, "num_return_sequences": 1, "output_attentions": false, "output_hidden_states": false, "output_past": true, "pad_token_id": 0, "prefix": null, "pruned_heads": {}, "repetition_penalty": 1.0, "task_specific_params": null, "temperature": 1.0, "top_k": 50, "top_p": 1.0, "torchscript": false, "use_bfloat16": false } --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-5-44823d82a38e> in <module>() ----> 1 config = T5Config(max_position_embeddings=1024) 2 frames /usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py in __init__(self, **kwargs) 106 for key, value in kwargs.items(): 107 try: --> 108 setattr(self, key, value) 109 except AttributeError as err: 110 logger.error("Can't set {} with value {} for {}".format(key, value, self)) AttributeError: can't set attribute ``` After further investigation in the docs, here is what I found (if I am wrong, please correct me): ``` self.vocab_size = vocab_size self.n_positions = n_positions self.d_model = d_model self.d_kv = d_kv self.d_ff = d_ff self.num_layers = num_layers self.num_heads = num_heads self.relative_attention_num_buckets = relative_attention_num_buckets self.dropout_rate = dropout_rate self.layer_norm_epsilon = layer_norm_epsilon self.initializer_factor = initializer_factor @property def max_position_embeddings(self): return self.n_positions ``` Looking at the code snippet, I think we have to use n_positions directly instead of max_position_embeddings. I tried this approach & **it worked**. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.8.0 - Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.0+cu101 (True) - Tensorflow version (GPU?): 2.2.0-rc3 (True) - Using GPU in script?: True
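A minimal sketch of the working approach from the report: since `max_position_embeddings` is a read-only property aliasing `n_positions` on `T5Config`, set `n_positions` directly:

```python
from transformers import T5Config

config = T5Config(n_positions=1024)    # settable constructor argument
print(config.max_position_embeddings)  # 1024, read through the read-only alias
```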
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4139/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4139/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4138
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4138/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4138/comments
https://api.github.com/repos/huggingface/transformers/issues/4138/events
https://github.com/huggingface/transformers/pull/4138
611,544,377
MDExOlB1bGxSZXF1ZXN0NDEyNjkyNjE0
4,138
[Roberta] fix hard wired pad token id
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I checked all models on `https://huggingface.co/models?search=roberta` and all of them have the pad_token_id set to `1` which was used as a default before. So this is good to merge for me." ]
1,588
1,588
1,588
MEMBER
null
This might break backward compatibility for users who have uploaded a roberta model and use a pad token other than 1 in the configs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4138/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4138/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4138", "html_url": "https://github.com/huggingface/transformers/pull/4138", "diff_url": "https://github.com/huggingface/transformers/pull/4138.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4138.patch", "merged_at": 1588718555000 }
https://api.github.com/repos/huggingface/transformers/issues/4137
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4137/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4137/comments
https://api.github.com/repos/huggingface/transformers/issues/4137/events
https://github.com/huggingface/transformers/issues/4137
611,504,515
MDU6SXNzdWU2MTE1MDQ1MTU=
4,137
Error: `cannot import name 'TFBertForMaskedLM'`
{ "login": "danyaljj", "id": 2441454, "node_id": "MDQ6VXNlcjI0NDE0NTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danyaljj", "html_url": "https://github.com/danyaljj", "followers_url": "https://api.github.com/users/danyaljj/followers", "following_url": "https://api.github.com/users/danyaljj/following{/other_user}", "gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}", "starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions", "organizations_url": "https://api.github.com/users/danyaljj/orgs", "repos_url": "https://api.github.com/users/danyaljj/repos", "events_url": "https://api.github.com/users/danyaljj/events{/privacy}", "received_events_url": "https://api.github.com/users/danyaljj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The issue was that I did not have TensorFlow installed. " ]
1,588
1,588
1,588
CONTRIBUTOR
null
When I try the following import: ```python from transformers import TFBertForMaskedLM ``` I get the following error: ``` Traceback (most recent call last): File "/Users/danielk/ideaProjects/farsi-language-models/src/try_bert_lm.py", line 3, in <module> from transformers import TFBertForMaskedLM ImportError: cannot import name 'TFBertForMaskedLM' from 'transformers' (/usr/local/lib/python3.7/site-packages/transformers/__init__.py) ``` My versions are: ``` torch 1.4.0 transformers 2.8.0 ```
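As the author's comment above notes, this fails when TensorFlow is not installed, because the `TF*` classes are only exported when TensorFlow is available. A minimal sketch of the fix:

```python
# Requires TensorFlow to be installed first, e.g.:
#   pip install tensorflow
from transformers import TFBertForMaskedLM

model = TFBertForMaskedLM.from_pretrained("bert-base-uncased")
```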
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4137/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4137/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4136
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4136/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4136/comments
https://api.github.com/repos/huggingface/transformers/issues/4136/events
https://github.com/huggingface/transformers/issues/4136
611,497,873
MDU6SXNzdWU2MTE0OTc4NzM=
4,136
Cannot load SpanBERT pre-trained model
{ "login": "changyeli", "id": 9058204, "node_id": "MDQ6VXNlcjkwNTgyMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9058204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/changyeli", "html_url": "https://github.com/changyeli", "followers_url": "https://api.github.com/users/changyeli/followers", "following_url": "https://api.github.com/users/changyeli/following{/other_user}", "gists_url": "https://api.github.com/users/changyeli/gists{/gist_id}", "starred_url": "https://api.github.com/users/changyeli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/changyeli/subscriptions", "organizations_url": "https://api.github.com/users/changyeli/orgs", "repos_url": "https://api.github.com/users/changyeli/repos", "events_url": "https://api.github.com/users/changyeli/events{/privacy}", "received_events_url": "https://api.github.com/users/changyeli/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "@palooney \r\nCan you check again ?. The following call worked for me\r\n```\r\nmodel = AutoModel.from_pretrained(\"SpanBERT/spanbert-base-cased\")\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Closing, feel free to reopen if it isn't fixed." ]
1,588
1,594
1,594
NONE
null
I have some questions regarding SpanBert loading using the transformers package. I downloaded the pre-trained file from the [SpanBert](https://github.com/facebookresearch/SpanBERT) GitHub repo and ```vocab.txt``` from Bert. Here is the code I used for loading: ```python model = BertModel.from_pretrained(config_file=config_file, pretrained_model_name_or_path=model_file, vocab_file=vocab_file) model.to("cuda") ``` where - ```config_file``` -> ```config.json``` - ```model_file``` -> ```pytorch_model.bin``` - ```vocab_file``` -> ```vocab.txt``` But I got a ```UnicodeDecodeError``` with the above code saying that ```'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte``` I also tried loading SpanBert with the method mentioned [here](https://huggingface.co/SpanBERT/spanbert-large-cased), but it returned ```OSError: file SpanBERT/spanbert-base-cased not found```. Do you have any suggestions on loading the pre-trained model correctly? Any suggestions are much appreciated. Thanks!
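For reference, a sketch of the usual local-loading pattern, under the assumption that the three downloaded files sit together in one directory (the directory name here is hypothetical): `from_pretrained` expects a model identifier or a directory path, not keyword paths to the individual files.

```python
from transformers import BertModel, BertTokenizer

# Assumption: ./spanbert_base/ contains config.json, pytorch_model.bin
# and vocab.txt together. `from_pretrained` takes the directory, not the
# path to pytorch_model.bin itself (hence the decode error above).
model = BertModel.from_pretrained("./spanbert_base/")
tokenizer = BertTokenizer.from_pretrained("./spanbert_base/")
model.to("cuda")
```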
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4136/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4136/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4135
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4135/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4135/comments
https://api.github.com/repos/huggingface/transformers/issues/4135/events
https://github.com/huggingface/transformers/issues/4135
611,487,504
MDU6SXNzdWU2MTE0ODc1MDQ=
4,135
Why does the PyTorch implementation of ALBERT use einsum while the TF one does not?
{ "login": "marrrcin", "id": 6958772, "node_id": "MDQ6VXNlcjY5NTg3NzI=", "avatar_url": "https://avatars.githubusercontent.com/u/6958772?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marrrcin", "html_url": "https://github.com/marrrcin", "followers_url": "https://api.github.com/users/marrrcin/followers", "following_url": "https://api.github.com/users/marrrcin/following{/other_user}", "gists_url": "https://api.github.com/users/marrrcin/gists{/gist_id}", "starred_url": "https://api.github.com/users/marrrcin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marrrcin/subscriptions", "organizations_url": "https://api.github.com/users/marrrcin/orgs", "repos_url": "https://api.github.com/users/marrrcin/repos", "events_url": "https://api.github.com/users/marrrcin/events{/privacy}", "received_events_url": "https://api.github.com/users/marrrcin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The two implementations are equivalent, but the Pytorch version is cumbersome.\r\nI think the code\r\n\r\n```python\r\n context_layer = context_layer.permute(0, 2, 1, 3).contiguous()\r\n\r\n # Should find a better way to do this\r\n w = (\r\n self.dense.weight.t()\r\n .view(self.num_attention_heads, self.attention_head_size, self.hidden_size)\r\n .to(context_layer.dtype)\r\n )\r\n b = self.dense.bias.to(context_layer.dtype)\r\n\r\n projected_context_layer = torch.einsum(\"bfnd,ndh->bfh\", context_layer, w) + b\r\n```\r\nshould be rewritten by\r\n\r\n```python\r\n context_layer = context_layer.permute(0, 2, 1, 3).contiguous()\r\n new_shape = context_layer.size()[:-2] + (-1,)\r\n context_layer = context_layer.view(*new_shape)\r\n\r\n projected_context_layer = self.dense(context_layer)\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Ping @LysandreJik ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Bump", "Ping @LysandreJik again", "For no particular reason, but it might not have been the best choice according to [this thread on performance](https://github.com/huggingface/transformers/issues/6771).", "OK, thanks." ]
1,588
1,602
1,602
CONTRIBUTOR
null
# ❓ Questions & Help I wanted to learn internals of ALBERT model from your implementation (which is BTW really clean in comparison to the original one - good job!), but I've stumbled upon weird looking part in the `AlbertAttention`: https://github.com/huggingface/transformers/blob/6af3306a1da0322f58861b1fbb62ce5223d97b8a/src/transformers/modeling_albert.py#L258 Why does PyTorch version use `einsum`-based notation while calculating hidden state (with manual usage of `dense` layer's weights), while the TensorFlow version just reshapes the `context_layer` and does standard "forward" on dense layer? https://github.com/huggingface/transformers/blob/6af3306a1da0322f58861b1fbb62ce5223d97b8a/src/transformers/modeling_tf_albert.py#L296 I would really like to know the explanation of this implementation - @LysandreJik cloud you shed some light here?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4135/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4135/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4134
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4134/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4134/comments
https://api.github.com/repos/huggingface/transformers/issues/4134/events
https://github.com/huggingface/transformers/issues/4134
611,479,305
MDU6SXNzdWU2MTE0NzkzMDU=
4,134
jplu/tf-xlm-roberta-large showing random performance
{ "login": "mobassir94", "id": 24439592, "node_id": "MDQ6VXNlcjI0NDM5NTky", "avatar_url": "https://avatars.githubusercontent.com/u/24439592?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mobassir94", "html_url": "https://github.com/mobassir94", "followers_url": "https://api.github.com/users/mobassir94/followers", "following_url": "https://api.github.com/users/mobassir94/following{/other_user}", "gists_url": "https://api.github.com/users/mobassir94/gists{/gist_id}", "starred_url": "https://api.github.com/users/mobassir94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mobassir94/subscriptions", "organizations_url": "https://api.github.com/users/mobassir94/orgs", "repos_url": "https://api.github.com/users/mobassir94/repos", "events_url": "https://api.github.com/users/mobassir94/events{/privacy}", "received_events_url": "https://api.github.com/users/mobassir94/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "same here, mine is torch version, though.\r\n@mobassir94 Did you fix it?", "@HenryPaik1 no\r\ni also faced this issue in both pytorch xla and in tf tpu" ]
1,588
1,611
1,594
NONE
null
# 🐛 jplu/tf-xlm-roberta-large shows random performance during TPU model training ## I have observed several times that jplu/tf-xlm-roberta-large shows random performance across experiments. For example, if I train a model and get 61% validation accuracy, then run the model again (without changing anything in the code), the second time I get over 96% accuracy. Sometimes I observe that the validation accuracy stays the same for all epochs, and then the same code gives me totally different training statistics on the next run. Why is this happening? Model I am using: jplu/tf-xlm-roberta-large Language I am using the model on (English, Chinese, Turkish, Spanish, etc.): The problem arises when using: jplu/tf-xlm-roberta-large for fine-tuning The task I am working on is: the ongoing Kaggle competition Jigsaw Multilingual Toxic Comment Classification, where the train set contains all-English comments with 0/1 (toxic/non-toxic) labels, and the validation and test data contain non-English comments. ## To reproduce Steps to reproduce the behavior: 1. Simply fork and run this kernel: https://www.kaggle.com/mobassir/understanding-cross-lingual-models and you will never get the same result (you will probably get a somewhat worse one). ## Expected behavior Training the same model several times should always produce close, if not identical, results. The randomness is high: sometimes a model with 91% validation accuracy gives 61% validation accuracy on the next run, and sometimes the validation accuracy of that same model freezes and it seems like the model is not learning anything (I repeat the model name: it's tf-xlm-roberta-large). ## Environment info Kaggle kernels/notebooks using TPU
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4134/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4134/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4133
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4133/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4133/comments
https://api.github.com/repos/huggingface/transformers/issues/4133/events
https://github.com/huggingface/transformers/issues/4133
611,443,102
MDU6SXNzdWU2MTE0NDMxMDI=
4,133
Problem migrating from pytorch-pretrained-bert to transformers
{ "login": "BiggHeadd", "id": 30789106, "node_id": "MDQ6VXNlcjMwNzg5MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/30789106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BiggHeadd", "html_url": "https://github.com/BiggHeadd", "followers_url": "https://api.github.com/users/BiggHeadd/followers", "following_url": "https://api.github.com/users/BiggHeadd/following{/other_user}", "gists_url": "https://api.github.com/users/BiggHeadd/gists{/gist_id}", "starred_url": "https://api.github.com/users/BiggHeadd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BiggHeadd/subscriptions", "organizations_url": "https://api.github.com/users/BiggHeadd/orgs", "repos_url": "https://api.github.com/users/BiggHeadd/repos", "events_url": "https://api.github.com/users/BiggHeadd/events{/privacy}", "received_events_url": "https://api.github.com/users/BiggHeadd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "OK, the problem is, in transformers, forward() method args position is little bit different compare to pytorch-pretrained-bert. the order change from (token, segment, mask) to (token, mask, segment). So pass the args by name will always right, lol" ]
1,588
1,588
1,588
NONE
null
I am doing sentiment analysis with pytorch-pretrained-bert and it works correctly, but when I switch to transformers, the accuracy on the dev data is poor (while the accuracy with pytorch-pretrained-bert is quite a bit higher). Besides, I have read the 'Migrating from pytorch-pretrained-bert to transformers' section. The input is: [CLS] sentence [SEP] aspect [SEP], and I just use BertForSequenceClassification (I also tried adding a classifier after BertModel), but everything fails on transformers. Can somebody figure out what's wrong with it? Thanks!
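As the comment above explains, the positional order of `forward()` changed between the two libraries, so a sketch of the safe pattern (model and checkpoint names here are placeholders) is to pass everything by keyword:

```python
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

enc = tokenizer.encode_plus("sentence", "aspect", return_tensors="pt")

# pytorch-pretrained-bert's forward() took (input_ids, token_type_ids,
# attention_mask); transformers takes (input_ids, attention_mask,
# token_type_ids). Keyword arguments prevent the segment ids from being
# silently consumed as the attention mask.
outputs = model(
    input_ids=enc["input_ids"],
    attention_mask=enc["attention_mask"],
    token_type_ids=enc["token_type_ids"],
)
```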
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4133/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4133/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4132
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4132/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4132/comments
https://api.github.com/repos/huggingface/transformers/issues/4132/events
https://github.com/huggingface/transformers/pull/4132
611,400,627
MDExOlB1bGxSZXF1ZXN0NDEyNTk0MTEx
4,132
Create README.md
{ "login": "savasy", "id": 6584825, "node_id": "MDQ6VXNlcjY1ODQ4MjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6584825?v=4", "gravatar_id": "", "url": "https://api.github.com/users/savasy", "html_url": "https://github.com/savasy", "followers_url": "https://api.github.com/users/savasy/followers", "following_url": "https://api.github.com/users/savasy/following{/other_user}", "gists_url": "https://api.github.com/users/savasy/gists{/gist_id}", "starred_url": "https://api.github.com/users/savasy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/savasy/subscriptions", "organizations_url": "https://api.github.com/users/savasy/orgs", "repos_url": "https://api.github.com/users/savasy/repos", "events_url": "https://api.github.com/users/savasy/events{/privacy}", "received_events_url": "https://api.github.com/users/savasy/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4132?src=pr&el=h1) Report\n> Merging [#4132](https://codecov.io/gh/huggingface/transformers/pull/4132?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6af3306a1da0322f58861b1fbb62ce5223d97b8a&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4132/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4132?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4132 +/- ##\n=======================================\n Coverage 78.84% 78.84% \n=======================================\n Files 114 114 \n Lines 18689 18689 \n=======================================\n+ Hits 14735 14736 +1 \n+ Misses 3954 3953 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4132?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4132/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.55% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4132?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4132?src=pr&el=footer). Last update [6af3306...d6aedbe](https://codecov.io/gh/huggingface/transformers/pull/4132?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks, @savasy! Can you add code fences around the code blocks?" ]
1,588
1,588
1,588
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4132/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4132/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4132", "html_url": "https://github.com/huggingface/transformers/pull/4132", "diff_url": "https://github.com/huggingface/transformers/pull/4132.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4132.patch", "merged_at": 1588944449000 }
https://api.github.com/repos/huggingface/transformers/issues/4131
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4131/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4131/comments
https://api.github.com/repos/huggingface/transformers/issues/4131/events
https://github.com/huggingface/transformers/pull/4131
611,376,443
MDExOlB1bGxSZXF1ZXN0NDEyNTc4Njc2
4,131
Make transformers-cli cross-platform
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4131?src=pr&el=h1) Report\n> Merging [#4131](https://codecov.io/gh/huggingface/transformers/pull/4131?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1cdd2ad2afb73f6af185aafecb7dd7941a90c4d1&el=desc) will **decrease** coverage by `0.11%`.\n> The diff coverage is `0.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4131/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4131?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4131 +/- ##\n==========================================\n- Coverage 78.85% 78.74% -0.12% \n==========================================\n Files 114 115 +1 \n Lines 18688 18712 +24 \n==========================================\n- Hits 14737 14734 -3 \n- Misses 3951 3978 +27 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4131?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/commands/transformers\\_cli.py](https://codecov.io/gh/huggingface/transformers/pull/4131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy90cmFuc2Zvcm1lcnNfY2xpLnB5) | `0.00% <0.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.14% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.61% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.92% <0.00%> (-0.13%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4131?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4131?src=pr&el=footer). Last update [1cdd2ad...3f8c3f5](https://codecov.io/gh/huggingface/transformers/pull/4131?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thoughts @aaugustin?", "Yes, this is the best way to provide a cross-platform command in a Python library.\r\n\r\nSee https://github.com/django/django/pull/2116 for another project making a similar change.", "Alright, good for you @LysandreJik @thomwolf?" ]
1,588
1,590
1,590
COLLABORATOR
null
Using `scripts` is a useful option in `setup.py`, particularly when you want access to non-Python scripts. However, in this case we want an entry point into some of our own Python scripts. To do this in a concise, cross-platform way, we can use `entry_points.console_scripts`. This change is necessary to provide the CLI on different platforms (particularly including Windows), which "scripts" does not ensure. Usage remains the same, but the "transformers-cli" script has to be moved (to be part of the library) and renamed (underscore + extension). So even though the script itself has been renamed to `transformers_cli.py` and moved to `src/commands`, its usage is still the same: `transformers-cli <args>`. Tested on Ubuntu and Windows.
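A sketch of what the described `setup.py` change looks like; the module path is inferred from the description above and may not match the merged code exactly.

```python
from setuptools import find_packages, setup

# Sketch only: `entry_points.console_scripts` replaces the old `scripts`
# entry, so pip generates a native launcher on every platform (including
# an .exe shim on Windows).
setup(
    name="transformers",
    packages=find_packages("src"),
    package_dir={"": "src"},
    entry_points={
        "console_scripts": ["transformers-cli=transformers.commands.transformers_cli:main"]
    },
)
```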
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4131/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4131/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4131", "html_url": "https://github.com/huggingface/transformers/pull/4131", "diff_url": "https://github.com/huggingface/transformers/pull/4131.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4131.patch", "merged_at": 1590501651000 }
https://api.github.com/repos/huggingface/transformers/issues/4130
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4130/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4130/comments
https://api.github.com/repos/huggingface/transformers/issues/4130/events
https://github.com/huggingface/transformers/issues/4130
611,354,140
MDU6SXNzdWU2MTEzNTQxNDA=
4,130
GPT-2 past behaves incorrectly when attention_mask is used
{ "login": "s-jse", "id": 60150701, "node_id": "MDQ6VXNlcjYwMTUwNzAx", "avatar_url": "https://avatars.githubusercontent.com/u/60150701?v=4", "gravatar_id": "", "url": "https://api.github.com/users/s-jse", "html_url": "https://github.com/s-jse", "followers_url": "https://api.github.com/users/s-jse/followers", "following_url": "https://api.github.com/users/s-jse/following{/other_user}", "gists_url": "https://api.github.com/users/s-jse/gists{/gist_id}", "starred_url": "https://api.github.com/users/s-jse/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/s-jse/subscriptions", "organizations_url": "https://api.github.com/users/s-jse/orgs", "repos_url": "https://api.github.com/users/s-jse/repos", "events_url": "https://api.github.com/users/s-jse/events{/privacy}", "received_events_url": "https://api.github.com/users/s-jse/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Solved. `attention_mask ` should be the full tensor (not just the last step), even when `past` is used." ]
1,588
1,588
1,588
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): GPT-2 Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ```python from transformers import GPT2LMHeadModel import torch model = GPT2LMHeadModel.from_pretrained('gpt2') model.eval() # w/ `attention_mask` w/ `past` input_ids = torch.tensor([[0, 0, 1, 2]]) position_ids = torch.tensor([[0, 0, 1, 2]]) attention_mask = torch.tensor([[0, 1, 1, 1]]) output = model(input_ids=input_ids, position_ids=position_ids, attention_mask=attention_mask) input_ids = torch.tensor([[3]]) position_ids = torch.tensor([[3]]) attention_mask = torch.tensor([[1]]) output = model(input_ids=input_ids, position_ids=position_ids, attention_mask=attention_mask, past=output[1]) print(output[0][0, -1, :].detach().numpy()) # w/ `attention_mask` w/o `past` input_ids = torch.tensor([[0, 0, 1, 2, 3]]) position_ids = torch.tensor([[0, 0, 1, 2, 3]]) attention_mask = torch.tensor([[0, 1, 1, 1, 1]]) output = model(input_ids=input_ids, position_ids=position_ids, attention_mask=attention_mask) print(output[0][0, -1, :].detach().numpy()) # w/o `attention_mask` w/ `past` input_ids = torch.tensor([[0, 1, 2]]) position_ids = torch.tensor([[0, 1, 2]]) output = model(input_ids=input_ids, position_ids=position_ids) input_ids = torch.tensor([[3]]) position_ids = torch.tensor([[3]]) output = model(input_ids=input_ids, position_ids=position_ids, past=output[1]) print(output[0][0, -1, :].detach().numpy()) # w/o `attention_mask` w/o `past` input_ids = torch.tensor([[0, 1, 2, 3]]) position_ids = torch.tensor([[0, 1, 2, 3]]) output = model(input_ids=input_ids, position_ids=position_ids) print(output[0][0, -1, :].detach().numpy()) ``` ## Expected behavior All printed outputs should be equal. But this is what I see: ``` [-63.085373 -62.34121 -61.08647 ... -76.70221 -73.235085 -65.88774 ] [-67.13892 -66.132126 -64.81706 ... -80.93089 -77.780525 -70.23946 ] [-67.1389 -66.13211 -64.81704 ... -80.930855 -77.780495 -70.239426] [-67.13892 -66.132126 -64.81706 ... -80.93089 -77.780525 -70.23946 ] ``` Specifically, it seems like `past` does not work correctly when used with `attention_mask`. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.8.0 - Platform: Linux-5.4.19-100.fc30.x86_64-x86_64-with-fedora-30-Thirty - Python version: 3.7.6 - PyTorch version (GPU?): 1.5.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
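For reference, the fix from the resolution comment above applied to the failing snippet, as of the reported version (2.8.0, where the argument is still called `past`; newer versions renamed it to `past_key_values`):

```python
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# First step, as in the report.
input_ids = torch.tensor([[0, 0, 1, 2]])
position_ids = torch.tensor([[0, 0, 1, 2]])
attention_mask = torch.tensor([[0, 1, 1, 1]])
output = model(input_ids=input_ids, position_ids=position_ids, attention_mask=attention_mask)

# Fix: when `past` is used, `attention_mask` must cover the FULL
# sequence (past tokens plus the new one), not just the last step.
input_ids = torch.tensor([[3]])
position_ids = torch.tensor([[3]])
attention_mask = torch.tensor([[0, 1, 1, 1, 1]])
output = model(input_ids=input_ids, position_ids=position_ids, attention_mask=attention_mask, past=output[1])
```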
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4130/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4130/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4129
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4129/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4129/comments
https://api.github.com/repos/huggingface/transformers/issues/4129/events
https://github.com/huggingface/transformers/issues/4129
611,334,490
MDU6SXNzdWU2MTEzMzQ0OTA=
4,129
Parse TensorFlow strings as well as normal strings
{ "login": "sachinruk", "id": 1410927, "node_id": "MDQ6VXNlcjE0MTA5Mjc=", "avatar_url": "https://avatars.githubusercontent.com/u/1410927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sachinruk", "html_url": "https://github.com/sachinruk", "followers_url": "https://api.github.com/users/sachinruk/followers", "following_url": "https://api.github.com/users/sachinruk/following{/other_user}", "gists_url": "https://api.github.com/users/sachinruk/gists{/gist_id}", "starred_url": "https://api.github.com/users/sachinruk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sachinruk/subscriptions", "organizations_url": "https://api.github.com/users/sachinruk/orgs", "repos_url": "https://api.github.com/users/sachinruk/repos", "events_url": "https://api.github.com/users/sachinruk/events{/privacy}", "received_events_url": "https://api.github.com/users/sachinruk/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hey, any updates about this issue? I'm facing the exact same problem!", "@sachinruk @joaopedromattos any luck here in the end :)" ]
1,588
1,646
1,594
CONTRIBUTOR
null
# 🚀 Feature request So this is something I came across while trying to get a (fast) streaming dataset via the tensorflow data API for TPUs. This is a crosspost of [this SO question](https://stackoverflow.com/questions/61555097/mapping-text-data-through-huggingface-tokenizer/). I have my encode function that looks like this: ```python from transformers import BertTokenizer, BertModel MODEL = 'bert-base-multilingual-uncased' tokenizer = BertTokenizer.from_pretrained(MODEL) def encode(texts, tokenizer=tokenizer, maxlen=10): # import pdb; pdb.set_trace() inputs = tokenizer.encode_plus( texts, return_tensors='tf', return_attention_masks=True, return_token_type_ids=True, pad_to_max_length=True, max_length=maxlen ) return inputs['input_ids'], inputs["token_type_ids"], inputs["attention_mask"] ``` I want to get my data **encoded on the fly** by doing this: ```python x_train = (tf.data.Dataset.from_tensor_slices(df_train.comment_text.astype(str).values) .map(encode)) ``` However, this throws the error: ``` ValueError: Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers. ``` From what I understood when I set a breakpoint inside `encode`, it was because I was sending a non-numpy array. How do I get huggingface transformers to play nice with tensorflow strings as inputs? If you need a dummy dataframe, here it is: ``` df_train = pd.DataFrame({'comment_text': ['Today was a good day']*5}) ``` ## What I tried I tried to use `from_generator` so that I could pass the strings to the `encode_plus` function. However, this does not work with TPUs. ```python AUTO = tf.data.experimental.AUTOTUNE def get_gen(df): def gen(): for i in range(len(df)): yield encode(df.loc[i, 'comment_text']) , df.loc[i, 'toxic'] return gen shapes = ((tf.TensorShape([maxlen]), tf.TensorShape([maxlen]), tf.TensorShape([maxlen])), tf.TensorShape([])) train_dataset = tf.data.Dataset.from_generator( get_gen(df_train), ((tf.int32, tf.int32, tf.int32), tf.int32), shapes ) train_dataset = train_dataset.batch(BATCH_SIZE).prefetch(AUTO) ``` My final solution was to encode all the strings using `batch_encode_plus` but to write those tensors out to disk. This currently takes ~1 hour, but this seems like an inefficient way of prepping data for training. ## Version Info: `transformers.__version__, tf.__version__` => `('2.7.0', '2.1.0')`
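For reference, one commonly suggested workaround for the mapping error above (a sketch, not an official recipe, reusing the post's own `encode` and `df_train`) is to wrap the tokenizer call in `tf.py_function`, which hands the mapped function an eager tensor whose bytes can be decoded back into the Python string the tokenizer expects. Note that it shares the TPU limitation the post mentions for `from_generator`.

```python
import tensorflow as tf

def encode_tf(text):
    # Inside tf.py_function the argument is an eager tensor, so .numpy()
    # is available and can be decoded back to a plain Python string.
    def py_encode(t):
        ids, type_ids, mask = encode(t.numpy().decode("utf-8"))
        return ids[0], type_ids[0], mask[0]

    return tf.py_function(py_encode, inp=[text], Tout=(tf.int32, tf.int32, tf.int32))

x_train = (
    tf.data.Dataset.from_tensor_slices(df_train.comment_text.astype(str).values)
    .map(encode_tf, num_parallel_calls=tf.data.experimental.AUTOTUNE)
)
```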
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4129/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4129/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4128
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4128/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4128/comments
https://api.github.com/repos/huggingface/transformers/issues/4128/events
https://github.com/huggingface/transformers/pull/4128
611,324,049
MDExOlB1bGxSZXF1ZXN0NDEyNTQyNzI2
4,128
Add decoder-specific error message for T5Stack.forward
{ "login": "enzoampil", "id": 39557688, "node_id": "MDQ6VXNlcjM5NTU3Njg4", "avatar_url": "https://avatars.githubusercontent.com/u/39557688?v=4", "gravatar_id": "", "url": "https://api.github.com/users/enzoampil", "html_url": "https://github.com/enzoampil", "followers_url": "https://api.github.com/users/enzoampil/followers", "following_url": "https://api.github.com/users/enzoampil/following{/other_user}", "gists_url": "https://api.github.com/users/enzoampil/gists{/gist_id}", "starred_url": "https://api.github.com/users/enzoampil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/enzoampil/subscriptions", "organizations_url": "https://api.github.com/users/enzoampil/orgs", "repos_url": "https://api.github.com/users/enzoampil/repos", "events_url": "https://api.github.com/users/enzoampil/events{/privacy}", "received_events_url": "https://api.github.com/users/enzoampil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4128?src=pr&el=h1) Report\n> Merging [#4128](https://codecov.io/gh/huggingface/transformers/pull/4128?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1cdd2ad2afb73f6af185aafecb7dd7941a90c4d1&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `0.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4128/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4128?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4128 +/- ##\n==========================================\n- Coverage 78.85% 78.84% -0.01% \n==========================================\n Files 114 114 \n Lines 18688 18689 +1 \n==========================================\n- Hits 14737 14736 -1 \n- Misses 3951 3953 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4128?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.66% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.92% <0.00%> (-0.13%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4128?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4128?src=pr&el=footer). Last update [1cdd2ad...a301d78](https://codecov.io/gh/huggingface/transformers/pull/4128?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,588
1,588
1,588
CONTRIBUTOR
null
#### This PR adds a decoder-specific error message in `T5Stack.forward` for the case that `self.is_decoder == True`. This was requested in #4126 and suggested by @patrickvonplaten in #3626. Confirmed that this works as expected in this Colab [notebook](https://colab.research.google.com/drive/1j1mdtOylXZClH-ikZthDiOxXg4DOis_4?usp=sharing). Missing `decoder_input_ids` example: ``` ## Setup t5-small and tokenized inputs from transformers import T5Tokenizer, T5Model tokenizer = T5Tokenizer.from_pretrained('t5-small') model = T5Model.from_pretrained('t5-small') input_ids = tokenizer.encode("Hello, my dog is cute", return_tensors="pt") # Batch size 1 # Test missing `decoder_input_ids` model(input_ids=input_ids)[0] # ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds ``` Missing `input_ids` example: ``` model(decoder_input_ids=input_ids)[0] # ValueError: You have to specify either input_ids or inputs_embeds ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4128/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4128/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4128", "html_url": "https://github.com/huggingface/transformers/pull/4128", "diff_url": "https://github.com/huggingface/transformers/pull/4128.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4128.patch", "merged_at": 1588502408000 }
https://api.github.com/repos/huggingface/transformers/issues/4127
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4127/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4127/comments
https://api.github.com/repos/huggingface/transformers/issues/4127/events
https://github.com/huggingface/transformers/issues/4127
611,323,721
MDU6SXNzdWU2MTEzMjM3MjE=
4,127
Model output should be a dict
{ "login": "max-yue", "id": 13486398, "node_id": "MDQ6VXNlcjEzNDg2Mzk4", "avatar_url": "https://avatars.githubusercontent.com/u/13486398?v=4", "gravatar_id": "", "url": "https://api.github.com/users/max-yue", "html_url": "https://github.com/max-yue", "followers_url": "https://api.github.com/users/max-yue/followers", "following_url": "https://api.github.com/users/max-yue/following{/other_user}", "gists_url": "https://api.github.com/users/max-yue/gists{/gist_id}", "starred_url": "https://api.github.com/users/max-yue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/max-yue/subscriptions", "organizations_url": "https://api.github.com/users/max-yue/orgs", "repos_url": "https://api.github.com/users/max-yue/repos", "events_url": "https://api.github.com/users/max-yue/events{/privacy}", "received_events_url": "https://api.github.com/users/max-yue/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "We will soon convert the outputs of models to named tuples to that it will be more readable :-) " ]
1,588
1,588
1,588
CONTRIBUTOR
null
# 🚀 Feature request I think model outputs should be dicts, since dicts are clearer than tuples: we do not know what output[1] means, but we do know what output["pooled_output"] means. https://github.com/huggingface/transformers#models-always-output-tuples
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4127/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4127/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4126
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4126/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4126/comments
https://api.github.com/repos/huggingface/transformers/issues/4126/events
https://github.com/huggingface/transformers/issues/4126
611,322,660
MDU6SXNzdWU2MTEzMjI2NjA=
4,126
No decoder-specific error message for T5Stack.forward
{ "login": "enzoampil", "id": 39557688, "node_id": "MDQ6VXNlcjM5NTU3Njg4", "avatar_url": "https://avatars.githubusercontent.com/u/39557688?v=4", "gravatar_id": "", "url": "https://api.github.com/users/enzoampil", "html_url": "https://github.com/enzoampil", "followers_url": "https://api.github.com/users/enzoampil/followers", "following_url": "https://api.github.com/users/enzoampil/following{/other_user}", "gists_url": "https://api.github.com/users/enzoampil/gists{/gist_id}", "starred_url": "https://api.github.com/users/enzoampil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/enzoampil/subscriptions", "organizations_url": "https://api.github.com/users/enzoampil/orgs", "repos_url": "https://api.github.com/users/enzoampil/repos", "events_url": "https://api.github.com/users/enzoampil/events{/privacy}", "received_events_url": "https://api.github.com/users/enzoampil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,588
1,588
1,588
CONTRIBUTOR
null
When I run a forward pass with `T5Model` and `decoder_input_ids` is missing, it gives me the same error message as when `input_ids` is missing (below). `ValueError: You have to specify either input_ids or inputs_embeds` I've confirmed with @patrickvonplaten in issue #3626 that it makes sense to have a specific error message for the case when `self.is_decoder == True`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4126/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4126/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4125
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4125/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4125/comments
https://api.github.com/repos/huggingface/transformers/issues/4125/events
https://github.com/huggingface/transformers/issues/4125
611,294,182
MDU6SXNzdWU2MTEyOTQxODI=
4,125
Is it possible for some individual sentences to have higher perplexity than the overall test corpus?
{ "login": "states786", "id": 64096105, "node_id": "MDQ6VXNlcjY0MDk2MTA1", "avatar_url": "https://avatars.githubusercontent.com/u/64096105?v=4", "gravatar_id": "", "url": "https://api.github.com/users/states786", "html_url": "https://github.com/states786", "followers_url": "https://api.github.com/users/states786/followers", "following_url": "https://api.github.com/users/states786/following{/other_user}", "gists_url": "https://api.github.com/users/states786/gists{/gist_id}", "starred_url": "https://api.github.com/users/states786/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/states786/subscriptions", "organizations_url": "https://api.github.com/users/states786/orgs", "repos_url": "https://api.github.com/users/states786/repos", "events_url": "https://api.github.com/users/states786/events{/privacy}", "received_events_url": "https://api.github.com/users/states786/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,588
1,594
1,594
NONE
null
Hi, I have calculated the overall perplexity of my test corpus after fine-tuning a pretrained GPT-2 model. Similarly, I have also calculated the perplexity of individual sentences extracted from the test corpus. **I notice that the perplexity of some individual sentences is even higher than the overall perplexity.** What could be the reason for this? Is it expected? Have I done something wrong in my calculation? Kindly share your opinion.
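Assuming perplexity is computed the usual way, as the exponential of the mean per-token negative log-likelihood, this is expected: the corpus value is a token-weighted average over all sentences, so individual sentences can land well above or below it. A tiny worked example with made-up numbers:

```python
import math

# Made-up per-token mean negative log-likelihoods for two sentences of
# equal length (10 tokens each).
sent_nll = [2.0, 4.0]
sent_len = [10, 10]

# Per-sentence perplexities: exp(2) ~ 7.4 and exp(4) ~ 54.6
sent_ppl = [math.exp(n) for n in sent_nll]

# Corpus perplexity is exp of the token-weighted mean NLL:
corpus_nll = sum(n * l for n, l in zip(sent_nll, sent_len)) / sum(sent_len)
corpus_ppl = math.exp(corpus_nll)  # exp(3.0) ~ 20.1

# The second sentence (~54.6) sits well above the corpus value (~20.1),
# so individual sentences above the corpus perplexity are expected.
print(sent_ppl, corpus_ppl)
```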
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4125/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4125/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4124
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4124/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4124/comments
https://api.github.com/repos/huggingface/transformers/issues/4124/events
https://github.com/huggingface/transformers/issues/4124
611,258,268
MDU6SXNzdWU2MTEyNTgyNjg=
4,124
Convert pytorch_model.pt [pretrained BERT model] to pytorch_model.onnx (ONNX)
{ "login": "pumpkinband", "id": 39296817, "node_id": "MDQ6VXNlcjM5Mjk2ODE3", "avatar_url": "https://avatars.githubusercontent.com/u/39296817?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pumpkinband", "html_url": "https://github.com/pumpkinband", "followers_url": "https://api.github.com/users/pumpkinband/followers", "following_url": "https://api.github.com/users/pumpkinband/following{/other_user}", "gists_url": "https://api.github.com/users/pumpkinband/gists{/gist_id}", "starred_url": "https://api.github.com/users/pumpkinband/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pumpkinband/subscriptions", "organizations_url": "https://api.github.com/users/pumpkinband/orgs", "repos_url": "https://api.github.com/users/pumpkinband/repos", "events_url": "https://api.github.com/users/pumpkinband/events{/privacy}", "received_events_url": "https://api.github.com/users/pumpkinband/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "May be of interest to @mfuntowicz ", "Hi @pumpkinband, \r\n\r\nI don't have the exact details of the model you're trying to load, but I'd assume a tensor of shape (1, 3) would make more sense (batch, sequence). ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,588
1,595
1,595
NONE
null
I use this to convert pytorch_model.pt to pytorch_model.onnx: ``` from torch.autograd import Variable import torch.onnx import torchvision import torch dummy_input = Variable(torch.randn(1, 3, 256, 256)) model = torch.load('./pytorch_model.pt') torch.onnx.export(model, dummy_input, "pytorch_model.onnx") ``` ERROR: `The expanded size of the tensor (256) must match the existing size (3) at non-singleton dimension 3. Target sizes: [1, 3, 256, 256]. Tensor sizes: [1, 3]` Is my `dummy_input` incorrect?
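Following the suggestion in the comments above, a hedged sketch of a more appropriate dummy input for a BERT-style model; the vocabulary size (30522, bert-base) and sequence length (128) are assumptions and should be adjusted to the actual model:

```python
import torch

# A BERT-style model takes token ids of shape (batch, sequence), not an
# image-like (1, 3, 256, 256) tensor.
model = torch.load("./pytorch_model.pt")
model.eval()

dummy_input = torch.randint(0, 30522, (1, 128), dtype=torch.long)
torch.onnx.export(model, dummy_input, "pytorch_model.onnx")
```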
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4124/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4124/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4123
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4123/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4123/comments
https://api.github.com/repos/huggingface/transformers/issues/4123/events
https://github.com/huggingface/transformers/pull/4123
611,250,481
MDExOlB1bGxSZXF1ZXN0NDEyNDkxODkz
4,123
Text corrections
{ "login": "lapolonio", "id": 1810412, "node_id": "MDQ6VXNlcjE4MTA0MTI=", "avatar_url": "https://avatars.githubusercontent.com/u/1810412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lapolonio", "html_url": "https://github.com/lapolonio", "followers_url": "https://api.github.com/users/lapolonio/followers", "following_url": "https://api.github.com/users/lapolonio/following{/other_user}", "gists_url": "https://api.github.com/users/lapolonio/gists{/gist_id}", "starred_url": "https://api.github.com/users/lapolonio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lapolonio/subscriptions", "organizations_url": "https://api.github.com/users/lapolonio/orgs", "repos_url": "https://api.github.com/users/lapolonio/repos", "events_url": "https://api.github.com/users/lapolonio/events{/privacy}", "received_events_url": "https://api.github.com/users/lapolonio/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@lapolonio - thanks a lot for these fixes. I already applied them in the main PR :-) " ]
1,588
1,588
1,588
NONE
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4123/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4123/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4123", "html_url": "https://github.com/huggingface/transformers/pull/4123", "diff_url": "https://github.com/huggingface/transformers/pull/4123.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4123.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4122
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4122/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4122/comments
https://api.github.com/repos/huggingface/transformers/issues/4122/events
https://github.com/huggingface/transformers/issues/4122
611,237,350
MDU6SXNzdWU2MTEyMzczNTA=
4,122
ValueError: You are attempting to pad samples but the tokenizer you are using (GPT2Tokenizer) does not have one.
{ "login": "zixiliuUSC", "id": 49173327, "node_id": "MDQ6VXNlcjQ5MTczMzI3", "avatar_url": "https://avatars.githubusercontent.com/u/49173327?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zixiliuUSC", "html_url": "https://github.com/zixiliuUSC", "followers_url": "https://api.github.com/users/zixiliuUSC/followers", "following_url": "https://api.github.com/users/zixiliuUSC/following{/other_user}", "gists_url": "https://api.github.com/users/zixiliuUSC/gists{/gist_id}", "starred_url": "https://api.github.com/users/zixiliuUSC/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zixiliuUSC/subscriptions", "organizations_url": "https://api.github.com/users/zixiliuUSC/orgs", "repos_url": "https://api.github.com/users/zixiliuUSC/repos", "events_url": "https://api.github.com/users/zixiliuUSC/events{/privacy}", "received_events_url": "https://api.github.com/users/zixiliuUSC/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834053007, "node_id": "MDU6TGFiZWwxODM0MDUzMDA3", "url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Pretraining)", "name": "Ex: LM (Pretraining)", "color": "76FFAF", "default": false, "description": "Related to language modeling pre-training" } ]
closed
false
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[ { "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }, { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "I had the same error; I think the problem lies in \"line_by_line\" since if I remove this option, my code can run fine.", "@BramVanroy - sorry to just link you here. Did we decide to add a force-padding option here or not yet? ", "We had the discussion over here https://github.com/huggingface/transformers/pull/3388#discussion_r407697829\r\n\r\n@GCHQResearcher92457 mentions that they \"tried to implement your suggestion\". Perhaps it's better if you @patrickvonplaten could implement review the changes in the updated PR here https://github.com/huggingface/transformers/pull/4009/commits/5ff6eb7073fc6959b4cd5464defddc101d3163f8 ?", "Any progress about this issue? I have the same problem if I use line_by_line with gpt2. Thanks", "> Any progress about this issue? I have the same problem if I use line_by_line with gpt2. Thanks\r\n\r\nSame problem with v2.9.1", "Personally, I think the fix is just that you _can't_ use the line_by_line dataset with gpt2 (because it doesn't have a padding token)\r\n\r\n@patrickvonplaten @BramVanroy Should I just raise an error that tells user to remove the `--line_by_line` flag from their command?", "@julien-c Perhaps you can have a look at PR https://github.com/huggingface/transformers/commit/5ff6eb7073fc6959b4cd5464defddc101d3163f8 there, they suggest to add a force_padding_token option so that if a model does not have a padding token by default, it is added to the vocabulary manually.\r\n\r\nI have no preference: I like the implementation in the PR but it might not be what you would want or expect. Raising an error is also fine for me.", "@julien-c I am stuck with same error. If not line by line how else can I train the GPT2 model from scratch? \r\n\r\nHere is my GPT2 config and language Model:\r\n\r\nfrom transformers import GPT2LMHeadModel, GPT2Config\r\n\r\n```\r\n# Initializing a GPT2 configuration\r\nconfiguration = GPT2Config(vocab_size=52_000)\r\nmodel = GPT2LMHeadModel(config=configuration)\r\nThe logic for Dataset Preparation:\r\n\r\nfrom transformers import LineByLineTextDataset\r\n\r\ndataset = LineByLineTextDataset(\r\n tokenizer=tokenizer,\r\n file_path=\"./deu-de_web-public_2019_1M-sentences.txt\",\r\n block_size=128,\r\n)\r\nfrom transformers import DataCollatorForLanguageModeling\r\n\r\ndata_collator = DataCollatorForLanguageModeling(\r\n tokenizer=tokenizer, mlm=False,\r\n)\r\n```\r\nThe training logic:\r\n\r\n```\r\nfrom transformers import Trainer, TrainingArguments\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./output\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=1,\r\n per_gpu_train_batch_size=64,\r\n save_steps=10_000,\r\n save_total_limit=2,\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=dataset,\r\n prediction_loss_only=True,\r\n)\r\ntrainer.train()\r\n```\r\n\r\n> Throws me: ValueError: You are attempting to pad samples but the tokenizer you are using (GPT2Tokenizer) does not have one.\r\n", "You need to use `TextDataset` and cannot use line by line at the moment.\r\nYou can build a large file and use your own special tokens if you build it completely from scratch or just reuse `<|endoftext|>` from pre-trained GPT-2.", "@borisdayma Thaks for the quick reply!\r\n\r\nWhere can I find more how these models can be trained with what kind of datasets and what kind of tokenizers and special tokens?\r\nAlso, Can this be used for the reformer too?\r\n\r\nPlease help me so that I can create simple and clear collab notebook and share it here so that 
others can easily use it. ", "A good place to start would be the language_modeling section of the [examples page from the doc](https://huggingface.co/transformers/examples.html)", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hi there, \r\nWe are facing same issue. \r\nFrom the thread above I am not sure what is the fix. If I dont use line_by_line, it defines its own sequence size and concatenates the sequences from multiple lines which are unrelated. How can I make it take each lines separately as sequence.", "You should set the PAD token manually equal to the EOS token. See https://github.com/huggingface/transformers/issues/2630 as well", "Thanks @patrickvonplaten it worked. All i had to do is to add the following after tokenizer initialization\r\n\r\n\t# bug manual fix for GPT2\r\n\t# https://github.com/huggingface/transformers/issues/3021\r\n\tif model_args.model_type == 'gpt2':\r\n\t\ttokenizer.pad_token = tokenizer.eos_token\r\n", "I'm not sure of the consequences of this. To be safe you probably also should set the IDs, then. Something like this:\r\n\r\n```\r\ntokenizer.pad_token_id = tokenizer.eos_token_id\r\n```\r\n\r\nEDIT: this is wrong, see below", "@BramVanroy - it's actually not possible to set the ids equal to each other, doing `tokenizer.pad_token = tokenizer.eos_token` should work and is my recommend way of doing it :-) ", "@patrickvonplaten yes I tried that too bot could not set the ids", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "> # bug manual fix for GPT2\r\n> # https://github.com/huggingface/transformers/issues/3021\r\n> if model_args.model_type == 'gpt2':\r\n> \ttokenizer.pad_token = tokenizer.eos_token\r\n\r\nI used this solution and the error went away. Though, this introduced a new problem for me - the model couldn't generate <|endoftext|> during inference. The model didn't learn to generate eos_token because it was ignored while computing the loss as it is same as pad_token. I had to use some other token as pad_token.\r\n\r\nOther than this, I also had to add eos_token to each list in LineByLineDataset.examples.\r\n\r\nNote: I am using transformers 3.4.0." ]
1,588
1,614
1,609
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...):GPT2 Language I am using the model on (English, Chinese ...):English The problem arises when using: my own modified scripts: (give details below) python examples/run_language_modeling.py \ --train_data_file=temp_gpt2/gpt2.train \ --output_dir=checkpoints/gpt2 \ --model_type=gpt2 \ --model_name_or_path=gpt2 \ --eval_data_file=temp_gpt2/test.txt \ --line_by_line \ --do_train \ --do_eval \ --evaluate_during_training \ --per_gpu_train_batch_size=20 \ --per_gpu_eval_batch_size=20 \ --gradient_accumulation_steps=1 \ --learning_rate=8e-5 \ --weight_decay=0.075 \ --adam_epsilon=1e-8 \ --warmup_steps=500 \ --max_grad_norm=5.0 \ --num_train_epochs=20 \ --logging_steps=500 \ --save_steps=500 The tasks I am working on is: * [ ] an official GLUE/SQUaD task: language modeling * [ ] my own task or dataset: (give details below) Conll2014 GEC ## To reproduce Steps to reproduce the behavior: 1. run the script I get the following error: ``` bash train_gpt2.sh 05/02/2020 10:14:25 - INFO - transformers.training_args - PyTorch: setting up devices 05/02/2020 10:14:25 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 2, distributed training: False, 16-bits training: False 05/02/2020 10:14:25 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='checkpoints/gpt2', overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluate_during_training=True, per_gpu_train_batch_size=20, per_gpu_eval_batch_size=20, gradient_accumulation_steps=1, learning_rate=8e-05, weight_decay=0.075, adam_epsilon=1e-08, max_grad_norm=5.0, num_train_epochs=20.0, max_steps=-1, warmup_steps=500, logging_dir=None, logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1) 05/02/2020 10:14:35 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json from cache at /home/zixi/.cache/torch/transformers/4be02c5697d91738003fb1685c9872f284166aa32e061576bbe6aaeb95649fcf.085d5f6a8e7812ea05ff0e6ed0645ab2e75d80387ad55c1ad9806ee70d272f80 05/02/2020 10:14:35 - INFO - transformers.configuration_utils - Model config GPT2Config { "activation_function": "gelu_new", "attn_pdrop": 0.1, "bos_token_id": 50256, "embd_pdrop": 0.1, "eos_token_id": 50256, "initializer_range": 0.02, "layer_norm_epsilon": 1e-05, "model_type": "gpt2", "n_ctx": 1024, "n_embd": 768, "n_head": 12, "n_layer": 12, "n_positions": 1024, "resid_pdrop": 0.1, "summary_activation": null, "summary_first_dropout": 0.1, "summary_proj_to_labels": true, "summary_type": "cls_index", "summary_use_proj": true, "vocab_size": 50257 } 05/02/2020 10:14:39 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json from cache at /home/zixi/.cache/torch/transformers/4be02c5697d91738003fb1685c9872f284166aa32e061576bbe6aaeb95649fcf.4c1d7fc2ac6ddabeaf0c8bec2ffc7dc112f668f5871a06efcff113d2797ec7d5 05/02/2020 10:14:39 - INFO - transformers.configuration_utils - Model config GPT2Config { "activation_function": "gelu_new", "architectures": [ "GPT2LMHeadModel" ], "attn_pdrop": 0.1, "bos_token_id": 50256, "embd_pdrop": 0.1, "eos_token_id": 50256, "initializer_range": 0.02, "layer_norm_epsilon": 1e-05, "model_type": "gpt2", "n_ctx": 1024, "n_embd": 768, "n_head": 12, "n_layer": 12, "n_positions": 1024, "resid_pdrop": 0.1, "summary_activation": null, "summary_first_dropout": 0.1, "summary_proj_to_labels": true, "summary_type": "cls_index", "summary_use_proj": true, "vocab_size": 50257 } 05/02/2020 10:14:42 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json from cache at /home/zixi/.cache/torch/transformers/f2808208f9bec2320371a9f5f891c184ae0b674ef866b79c58177067d15732dd.1512018be4ba4e8726e41b9145129dc30651ea4fec86aa61f4b9f40bf94eac71 05/02/2020 10:14:42 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt from cache at /home/zixi/.cache/torch/transformers/d629f792e430b3c76a1291bb2766b0a047e36fae0588f9dbc1ae51decdff691b.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda 05/02/2020 10:14:43 - INFO - transformers.modeling_utils - loading weights file https://cdn.huggingface.co/gpt2-pytorch_model.bin from cache at /home/zixi/.cache/torch/transformers/d71fd633e58263bd5e91dd3bde9f658bafd81e11ece622be6a3c2e4d42d8fd89.778cf36f5c4e5d94c8cd9cefcf2a580c8643570eb327f0d4a1f007fab2acbdf1 05/02/2020 10:14:47 - INFO - transformers.modeling_utils - Weights of GPT2LMHeadModel not initialized from pretrained model: ['h.0.attn.masked_bias', 'h.1.attn.masked_bias', 'h.2.attn.masked_bias', 'h.3.attn.masked_bias', 'h.4.attn.masked_bias', 'h.5.attn.masked_bias', 'h.6.attn.masked_bias', 'h.7.attn.masked_bias', 'h.8.attn.masked_bias', 'h.9.attn.masked_bias', 'h.10.attn.masked_bias', 'h.11.attn.masked_bias', 'lm_head.weight'] 05/02/2020 10:14:47 - INFO - transformers.data.datasets.language_modeling - Creating features from dataset file at temp_gpt2/gpt2_train.txt 05/02/2020 10:16:41 - INFO - transformers.data.datasets.language_modeling - Creating features from dataset file at temp_gpt2/gpt2_test.txt 05/02/2020 10:16:44 - INFO - transformers.trainer - ***** Running training ***** 05/02/2020 10:16:44 - INFO - transformers.trainer - Num examples = 1130686 05/02/2020 10:16:44 - INFO - transformers.trainer - Num Epochs = 20 05/02/2020 10:16:44 - INFO - transformers.trainer - Instantaneous batch size per GPU = 20 05/02/2020 10:16:44 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 40 05/02/2020 10:16:44 - INFO - transformers.trainer - Gradient Accumulation steps = 1 05/02/2020 10:16:44 - INFO - transformers.trainer - Total optimization steps = 565360 Epoch: 0%| | 0/20 [00:00<?, ?it/sTraceback (most recent call last): | 0/28268 [00:00<?, ?it/s] File "examples/run_language_modeling.py", line 284, in <module> main() File "examples/run_language_modeling.py", line 254, in main trainer.train(model_path=model_path) File "/home/zixi/anaconda3/envs/ROC/lib/python3.7/site-packages/transformers/trainer.py", line 307, in train for step, inputs in enumerate(epoch_iterator): File "/home/zixi/anaconda3/envs/ROC/lib/python3.7/site-packages/tqdm/std.py", line 1107, in __iter__ for obj in iterable: File "/home/zixi/anaconda3/envs/ROC/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__ data = self._next_data() File "/home/zixi/anaconda3/envs/ROC/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/home/zixi/anaconda3/envs/ROC/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch return self.collate_fn(data) File "/home/zixi/anaconda3/envs/ROC/lib/python3.7/site-packages/transformers/data/data_collator.py", line 91, in collate_batch batch = self._tensorize_batch(examples) File "/home/zixi/anaconda3/envs/ROC/lib/python3.7/site-packages/transformers/data/data_collator.py", line 106, in _tensorize_batch "You are attempting to pad samples but the tokenizer you are using" ValueError: You are attempting to pad samples but the tokenizer you are using (GPT2Tokenizer) does not have one. Epoch: 0%| | 0/20 [00:00<?, ?it/s] Iteration: 0%| ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version:master - Platform:Ubuntu 18.04LTS - Python version:3.7 - PyTorch version (GPU?):1.5 - Tensorflow version (GPU?):-- - Using GPU in script?:yes - Using distributed or parallel set-up in script?:No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4122/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4122/timeline
completed
null
null
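The workaround the thread on issue 4122 above converges on is to reuse GPT-2's EOS token as the pad token before building a line-by-line dataset. A minimal sketch under the transformers 2.x-era APIs discussed there; note the caveat from the final comment that pad positions are ignored in the loss, so a model padded with EOS may never learn to emit `<|endoftext|>` at inference time.

```python
from transformers import DataCollatorForLanguageModeling, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# GPT-2 ships without a pad token; alias it to the EOS token so that
# line-by-line batches can be padded. Set the token, not the id --
# the thread notes the ids cannot be assigned to each other directly.
tokenizer.pad_token = tokenizer.eos_token

# Causal LM collation (mlm=False) can now pad variable-length examples.
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
```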
https://api.github.com/repos/huggingface/transformers/issues/4121
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4121/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4121/comments
https://api.github.com/repos/huggingface/transformers/issues/4121/events
https://github.com/huggingface/transformers/issues/4121
611,223,929
MDU6SXNzdWU2MTEyMjM5Mjk=
4,121
`eos_token_id` breaks the `T5ForConditionalGeneration`
{ "login": "girishponkiya", "id": 2093282, "node_id": "MDQ6VXNlcjIwOTMyODI=", "avatar_url": "https://avatars.githubusercontent.com/u/2093282?v=4", "gravatar_id": "", "url": "https://api.github.com/users/girishponkiya", "html_url": "https://github.com/girishponkiya", "followers_url": "https://api.github.com/users/girishponkiya/followers", "following_url": "https://api.github.com/users/girishponkiya/following{/other_user}", "gists_url": "https://api.github.com/users/girishponkiya/gists{/gist_id}", "starred_url": "https://api.github.com/users/girishponkiya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/girishponkiya/subscriptions", "organizations_url": "https://api.github.com/users/girishponkiya/orgs", "repos_url": "https://api.github.com/users/girishponkiya/repos", "events_url": "https://api.github.com/users/girishponkiya/events{/privacy}", "received_events_url": "https://api.github.com/users/girishponkiya/received_events", "type": "User", "site_admin": false }
[ { "id": 1834059054, "node_id": "MDU6TGFiZWwxODM0MDU5MDU0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Generation", "name": "Ex: Generation", "color": "06EFF8", "default": false, "description": "Natural Language Generation" } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hi @girishponkiya,\r\n\r\nThanks for the detailed error description. I was not able to reproduce the error with the newest version of transformers, 2.9.0.\r\n\r\nSee google colab here: https://colab.research.google.com/drive/14TQzMBPuaNb6aZ_NoVFQrr_ZTJS2mCr3 . Can you check again with a newer version of transformers?", "Hi @patrickvonplaten,\r\n\r\nBecause of a bug in `tokenizers` (v0.7.0), the following code sets a wrong value to `eos_token_id`:\r\n```python\r\n# I want to set <extra_id_1> as end of a sequence.\r\neos_token_id = t5_tokenizer.encode('<extra_id_1>')[0]\r\n```\r\n\r\nThe correct value of `eos_token_id` should be 32098 (index of <extra_id_1>). Just replace the above snippet with the following to reproduce the error:\r\n```python\r\n# I want to set <extra_id_1> as end of a sequence.\r\neos_token_id = 32098 # t5_tokenizer.encode('<extra_id_1>')[0]\r\n```\r\n\r\nPS: for more details on the tokenizer bug, you may refer #4021 ", "same issue with GPT-2 to generate texts using beam search, eos_token_id set to 50256(<|endoftext|>)", "I had the same issue with GPT-2 and fixed it by hacking the code in `transformers/modeling_utils.py`\r\nfrom\r\n<pre>\r\nif eos_token_id is not None and all(\r\n (token_id % vocab_size).item() <b>is not</b> eos_token_id for token_id in next_tokens[batch_idx]\r\n):\r\n assert torch.all(\r\n next_scores[batch_idx, :num_beams] == beam_scores.view(batch_size, num_beams)[batch_idx]\r\n ), \"If batch_idx is not done, final next scores: {} have to equal to accumulated beam_scores: {}\".format(\r\n next_scores[:, :num_beams][batch_idx], beam_scores.view(batch_size, num_beams)[batch_idx],\r\n )\r\n</pre>\r\nto\r\n<pre>\r\nif eos_token_id is not None and all(\r\n (token_id % vocab_size).item() <b>!=</b> eos_token_id for token_id in next_tokens[batch_idx]\r\n):\r\n assert torch.all(\r\n next_scores[batch_idx, :num_beams] == beam_scores.view(batch_size, num_beams)[batch_idx]\r\n ), \"If batch_idx is not done, final next scores: {} have to equal to accumulated beam_scores: {}\".format(\r\n next_scores[:, :num_beams][batch_idx], beam_scores.view(batch_size, num_beams)[batch_idx],\r\n )\r\n</pre>\r\nHope it could help.", "@junyann : Thanks, that's not a hack, should be the right fix. Irrespective of whether the the issues are solely caused by this, the \"is not\" comparison is wrong and should be fixed.\r\n\r\n@patrickvonplaten : Thoughts?\r\n\r\nEncountered the same assertion failure today and was looking to reproduce, ended up here.\r\n " ]
1,588
1,591
1,591
CONTRIBUTOR
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): `T5ForConditionalGeneration` Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) ```python from transformers import T5Tokenizer, T5Config, T5ForConditionalGeneration T5_PATH = 't5-base' # "t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b" DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # My envirnment uses CPU t5_tokenizer = T5Tokenizer.from_pretrained(T5_PATH) t5_config = T5Config.from_pretrained(T5_PATH) t5_mlm = T5ForConditionalGeneration.from_pretrained(T5_PATH, config=t5_config).to(DEVICE) text = 'India is a <extra_id_0> of the world. </s>' encoded = t5_tokenizer.encode_plus(text, add_special_tokens=True, return_tensors='pt') input_ids = encoded['input_ids'].to(DEVICE) # I want to set <extra_id_1> as end of a sequence. eos_token_id = t5_tokenizer.encode('<extra_id_1>')[0] outputs = t5_mlm.generate(input_ids=input_ids, num_beams=200, num_return_sequences=20, eos_token_id=eos_token_id, # <-- This is causes the error. Just removing it works fine. max_length=5) results = [t5_tokenizer.decode(output, skip_special_tokens=False, clean_up_tokenization_spaces=False) for output in outputs] print(results) ``` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) * [x] Just playing with the `T5` model ## To reproduce Steps to reproduce the behavior: 1. Start a new Google Colab notebook, 2. Install transformers using `pip install git+https://github.com/huggingface/transformers`, 3. Restart the runtime, and 4. Run the above snippet. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ``` AssertionError Traceback (most recent call last) <ipython-input-86-aae5bd45e59c> in <module>() 2 num_beams=200, num_return_sequences=20, max_length=5, 3 # repetition_penalty=1.2, ----> 4 eos_token_id=eos_token_id, 5 ) 6 # print(outputs) 2 frames /usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs) 13 def decorate_context(*args, **kwargs): 14 with self: ---> 15 return func(*args, **kwargs) 16 return decorate_context 17 /usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, attention_mask, decoder_start_token_id) 993 vocab_size=vocab_size, 994 encoder_outputs=encoder_outputs, --> 995 attention_mask=attention_mask, 996 ) 997 else: /usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in _generate_beam_search(self, input_ids, cur_len, max_length, min_length, do_sample, early_stopping, temperature, top_k, top_p, repetition_penalty, no_repeat_ngram_size, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, decoder_start_token_id, batch_size, num_return_sequences, length_penalty, num_beams, vocab_size, encoder_outputs, attention_mask) 1359 next_scores[batch_idx, :num_beams] == beam_scores.view(batch_size, num_beams)[batch_idx] 1360 ), "If batch_idx is not done, final next scores: {} have to equal to accumulated beam_scores: {}".format( -> 1361 next_scores[:, :num_beams][batch_idx], beam_scores.view(batch_size, num_beams)[batch_idx], 1362 ) 1363 AssertionError: If batch_idx is not done, final next scores: tensor([-3.5015, -4.4583, -4.5677, -4.5808, -4.7325, -4.8779, -5.1127, -5.2106, -5.2147, -5.3292, -5.3575, -5.5227, -5.6846, -5.7290, -5.8227, -5.8522, -5.8537, -5.9930, -6.0119, -6.0440, -6.0469, -6.0496, -6.0739, -6.0959, -6.1007, -6.1125, -6.1519, -6.2492, -6.2850, -6.3033, -6.3035, -6.3677, -6.3798, -6.4438, -6.4508, -6.4511, -6.4993, -6.5149, -6.5269, -6.5330, -6.5335, -6.5662, -6.6071, -6.6222, -6.6331, -6.6437, -6.6454, -6.6660, -6.7163, -6.7690, -6.7841, -6.8139, -6.8147, -6.8157, -6.8297, -6.8297, -6.8326, -6.8448, -6.8493, -6.8763, -6.8866, -6.8871, -6.9026, -6.9046, -6.9078, -6.9475, -6.9544, -6.9654, -6.9672, -6.9735, -6.9749, -6.9795, -7.0020, -7.0024, -7.0124, -7.0310, -7.0837, -7.1405, -7.1407, -7.1420, -7.1652, -7.1675, -7.1788, -7.1800, -7.1875, -7.2146, -7.2148, -7.2294, -7.2538, -7.2765, -7.2775, -7.2859, -7.3110, -7.3120, -7.3189, -7.3403, -7.3438, -7.3479, -7.3535, -7.3592, -7.3733, -7.3740, -7.3842, -7.3848, -7.3882, -7.3889, -7.4192, -7.4192, -7.4211, -7.4657, -7.4662, -7.4774, -7.4776, -7.4862, -7.4999, -7.5150, -7.5156, -7.5187, -7.5214, -7.5253, -7.5365, -7.5376, -7.5547, -7.5566, -7.5815, -7.5922, -7.5940, -7.5960, -7.5996, -7.6115, -7.6128, -7.6406, -7.6485, -7.6619, -7.6744, -7.6752, -7.6804, -7.6875, -7.6928, -7.6979, -7.7064, -7.7172, -7.7229, -7.7429, -7.7431, -7.7482, -7.7510, -7.7576, -7.7619, -7.7641, -7.7655, -7.7721, -7.7787, -7.7792, -7.7842, -7.7859, -7.8005, -7.8083, -7.8087, -7.8128, -7.8145, -7.8184, -7.8185, -7.8197, -7.8228, -7.8275, -7.8381, -7.8478, -7.8572, -7.8598, -7.8677, -7.8712, -7.8755, -7.8820, -7.8867, -7.9135, -7.9308, -7.9485, -7.9613, -7.9629, -7.9649, -7.9706, -7.9714, -7.9739, -7.9740, -7.9757, -7.9779, -7.9809, -7.9858, -7.9964, -7.9992, -8.0047, -8.0086, -8.0105, -8.0255, -8.0500, -8.0550, -8.0604, -8.0811, -8.0858]) have to equal to accumulated beam_scores: tensor([-4.7325, -5.1127, -5.2106, -5.3292, -5.8227, -5.8522, -6.0739, -6.0959, -6.1007, -6.1125, -6.2492, -6.3033, -6.3035, -6.3798, -6.4438, -6.4508, -6.4511, -6.4993, -6.5269, -6.5330, -6.5662, -6.6331, -6.6660, -6.7163, -6.8139, -6.8157, -6.8297, -6.8297, -6.8493, -6.8763, -6.8871, -6.9026, -6.9078, -6.9475, -6.9735, -6.9749, -7.0020, -7.0024, -7.0837, -7.1405, -7.1420, -7.1652, -7.1800, -7.1875, -7.2146, -7.2294, -7.2538, -7.2765, -7.2775, -7.2859, -7.3189, -7.3403, -7.3438, -7.3535, -7.3592, -7.3733, -7.4192, -7.4192, -7.4657, -7.4662, -7.4774, -7.4999, -7.5150, -7.5156, -7.5187, -7.5253, -7.5365, -7.5547, -7.5566, -7.5815, -7.5922, -7.5940, -7.5996, -7.6115, -7.6128, -7.6406, -7.6619, -7.6744, -7.6752, -7.6804, -7.6875, -7.7172, -7.7229, -7.7429, -7.7431, -7.7482, -7.7510, -7.7619, -7.7641, -7.7655, -7.7787, -7.7792, -7.7842, -7.8083, -7.8087, -7.8145, -7.8184, -7.8185, -7.8197, -7.8228, -7.8275, -7.8478, -7.8572, -7.8598, -7.8677, -7.8712, -7.8755, -7.8820, -7.8867, -7.9613, -7.9629, -7.9706, -7.9714, -7.9739, -7.9740, -7.9757, -7.9779, -7.9809, -7.9858, -7.9964, -7.9992, -8.0047, -8.0086, -8.0255, -8.0500, -8.0550, -8.0604, -8.0811, -8.0858, -8.0862, -8.0880, -8.1197, -8.1198, -8.1261, -8.1331, -8.1337, -8.1364, -8.1509, -8.1633, -8.1681, -8.1759, -8.1843, -8.2007, -8.2019, -8.2094, -8.2223, -8.2236, -8.2504, -8.2580, -8.2653, -8.2748, -8.2778, -8.2782, -8.2860, -8.2931, -8.2932, -8.2963, -8.3047, -8.3111, -8.3161, -8.3215, -8.3255, -8.3393, -8.3439, -8.3567, -8.3608, -8.3706, -8.3810, -8.3858, -8.4023, -8.4076, -8.4111, -8.4153, -8.4198, -8.4335, -8.4365, -8.4390, -8.4397, -8.4480, -8.4511, -8.4532, -8.4594, -8.4663, -8.4681, -8.4686, -8.4720, -8.4726, -8.4778, -8.4781, -8.4786, -8.4885, -8.5027, -8.5173, -8.5181, -8.5226, -8.5298, -8.5308, -8.5341, -8.5463, -8.5466]) ``` ## Expected behavior If I remove the `eos_token_id` argument from `t5_mlm.generate(...)` (refer the given code snippet), it returns the following (expected) output: ``` ['<extra_id_0> cornerstone<extra_id_1>', '<extra_id_0> part<extra_id_1> developing', '<extra_id_0> huge part<extra_id_1>', '<extra_id_0> big part<extra_id_1>', '<extra_id_0> beautiful part<extra_id_1>', '<extra_id_0> very important part', '<extra_id_0> part<extra_id_1> larger', '<extra_id_0> unique part<extra_id_1>', '<extra_id_0> part<extra_id_1> developed', '<extra_id_0> part<extra_id_1> large', '<extra_id_0> beautiful country in', '<extra_id_0> part of the', '<extra_id_0> small part<extra_id_1>', '<extra_id_0> part<extra_id_1> great', '<extra_id_0> part<extra_id_1> bigger', '<extra_id_0> country in the', '<extra_id_0> large part<extra_id_1>', '<extra_id_0> part<extra_id_1> middle', '<extra_id_0> significant part<extra_id_1>', '<extra_id_0> part<extra_id_1> vast'] ``` ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.8.0 - Platform: Google Colab - Python version: 3.6 - PyTorch version (GPU?): 1.5.0+cu101 - Tensorflow version (GPU?): N/A - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4121/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4121/timeline
completed
null
null
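The comments on issue 4121 above trace the assertion failure to an identity check (`is not`) in `_generate_beam_search` where a value check (`!=`) was intended. A self-contained illustration of that pitfall; the id 32098 is a placeholder for T5's `<extra_id_1>`, and the tensor stands in for `next_tokens`:

```python
import torch

eos_token_id = 32098               # placeholder for the <extra_id_1> id
next_tokens = torch.tensor([32098])

# .item() builds a fresh Python int. Outside CPython's small-int cache,
# a fresh int is a different *object* even when its *value* matches:
print(next_tokens[0].item() != eos_token_id)      # False -- values are equal
print(next_tokens[0].item() is not eos_token_id)  # True  -- objects differ,
                                                  # so the `is not` guard fires
                                                  # even when EOS was reached
```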
https://api.github.com/repos/huggingface/transformers/issues/4120
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4120/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4120/comments
https://api.github.com/repos/huggingface/transformers/issues/4120/events
https://github.com/huggingface/transformers/pull/4120
611,215,138
MDExOlB1bGxSZXF1ZXN0NDEyNDY3OTc1
4,120
fixed utils_summarization import path
{ "login": "boxabirds", "id": 147305, "node_id": "MDQ6VXNlcjE0NzMwNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/147305?v=4", "gravatar_id": "", "url": "https://api.github.com/users/boxabirds", "html_url": "https://github.com/boxabirds", "followers_url": "https://api.github.com/users/boxabirds/followers", "following_url": "https://api.github.com/users/boxabirds/following{/other_user}", "gists_url": "https://api.github.com/users/boxabirds/gists{/gist_id}", "starred_url": "https://api.github.com/users/boxabirds/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/boxabirds/subscriptions", "organizations_url": "https://api.github.com/users/boxabirds/orgs", "repos_url": "https://api.github.com/users/boxabirds/repos", "events_url": "https://api.github.com/users/boxabirds/events{/privacy}", "received_events_url": "https://api.github.com/users/boxabirds/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "I think you need to run `make style` and resolve merge conflicts.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,588
1,599
1,599
NONE
null
As per issue #3827 I've removed the dot in the import.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4120/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4120/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4120", "html_url": "https://github.com/huggingface/transformers/pull/4120", "diff_url": "https://github.com/huggingface/transformers/pull/4120.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4120.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4119
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4119/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4119/comments
https://api.github.com/repos/huggingface/transformers/issues/4119/events
https://github.com/huggingface/transformers/pull/4119
611,214,398
MDExOlB1bGxSZXF1ZXN0NDEyNDY3NTA4
4,119
Fix markdown to show the results table properly
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4119?src=pr&el=h1) Report\n> Merging [#4119](https://codecov.io/gh/huggingface/transformers/pull/4119?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1cdd2ad2afb73f6af185aafecb7dd7941a90c4d1&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4119/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4119?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4119 +/- ##\n==========================================\n- Coverage 78.85% 78.84% -0.02% \n==========================================\n Files 114 114 \n Lines 18688 18688 \n==========================================\n- Hits 14737 14734 -3 \n- Misses 3951 3954 +3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4119?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.14% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.61% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.92% <0.00%> (-0.13%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4119?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4119?src=pr&el=footer). Last update [1cdd2ad...397a837](https://codecov.io/gh/huggingface/transformers/pull/4119?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,588
1,588
1,588
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4119/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4119/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4119", "html_url": "https://github.com/huggingface/transformers/pull/4119", "diff_url": "https://github.com/huggingface/transformers/pull/4119.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4119.patch", "merged_at": 1588775910000 }
https://api.github.com/repos/huggingface/transformers/issues/4118
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4118/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4118/comments
https://api.github.com/repos/huggingface/transformers/issues/4118/events
https://github.com/huggingface/transformers/pull/4118
611,189,690
MDExOlB1bGxSZXF1ZXN0NDEyNDUxNjEw
4,118
Update run_pl_ner.py
{ "login": "williamFalcon", "id": 3640001, "node_id": "MDQ6VXNlcjM2NDAwMDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3640001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/williamFalcon", "html_url": "https://github.com/williamFalcon", "followers_url": "https://api.github.com/users/williamFalcon/followers", "following_url": "https://api.github.com/users/williamFalcon/following{/other_user}", "gists_url": "https://api.github.com/users/williamFalcon/gists{/gist_id}", "starred_url": "https://api.github.com/users/williamFalcon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/williamFalcon/subscriptions", "organizations_url": "https://api.github.com/users/williamFalcon/orgs", "repos_url": "https://api.github.com/users/williamFalcon/repos", "events_url": "https://api.github.com/users/williamFalcon/events{/privacy}", "received_events_url": "https://api.github.com/users/williamFalcon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks @williamFalcon – do we need to bump the minimal version of `pytorch-lightning` then?", "We're backward compatible, so no need to bump but 0.7.5 has forward support for native amp and solves a few key bugs that were present in earlier versions. I would recommend bumping up." ]
1,588
1,588
1,588
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4118/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4118/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4118", "html_url": "https://github.com/huggingface/transformers/pull/4118", "diff_url": "https://github.com/huggingface/transformers/pull/4118.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4118.patch", "merged_at": 1588430302000 }
https://api.github.com/repos/huggingface/transformers/issues/4117
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4117/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4117/comments
https://api.github.com/repos/huggingface/transformers/issues/4117/events
https://github.com/huggingface/transformers/pull/4117
611,189,483
MDExOlB1bGxSZXF1ZXN0NDEyNDUxNDg2
4,117
Update run_pl_glue.py
{ "login": "williamFalcon", "id": 3640001, "node_id": "MDQ6VXNlcjM2NDAwMDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3640001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/williamFalcon", "html_url": "https://github.com/williamFalcon", "followers_url": "https://api.github.com/users/williamFalcon/followers", "following_url": "https://api.github.com/users/williamFalcon/following{/other_user}", "gists_url": "https://api.github.com/users/williamFalcon/gists{/gist_id}", "starred_url": "https://api.github.com/users/williamFalcon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/williamFalcon/subscriptions", "organizations_url": "https://api.github.com/users/williamFalcon/orgs", "repos_url": "https://api.github.com/users/williamFalcon/repos", "events_url": "https://api.github.com/users/williamFalcon/events{/privacy}", "received_events_url": "https://api.github.com/users/williamFalcon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Same question as https://github.com/huggingface/transformers/pull/4118#issuecomment-622963241" ]
1,588
1,588
1,588
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4117/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4117/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4117", "html_url": "https://github.com/huggingface/transformers/pull/4117", "diff_url": "https://github.com/huggingface/transformers/pull/4117.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4117.patch", "merged_at": 1588430311000 }
https://api.github.com/repos/huggingface/transformers/issues/4116
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4116/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4116/comments
https://api.github.com/repos/huggingface/transformers/issues/4116/events
https://github.com/huggingface/transformers/pull/4116
611,182,351
MDExOlB1bGxSZXF1ZXN0NDEyNDQ3NjE4
4,116
Updated
{ "login": "qianchu", "id": 19550979, "node_id": "MDQ6VXNlcjE5NTUwOTc5", "avatar_url": "https://avatars.githubusercontent.com/u/19550979?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qianchu", "html_url": "https://github.com/qianchu", "followers_url": "https://api.github.com/users/qianchu/followers", "following_url": "https://api.github.com/users/qianchu/following{/other_user}", "gists_url": "https://api.github.com/users/qianchu/gists{/gist_id}", "starred_url": "https://api.github.com/users/qianchu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qianchu/subscriptions", "organizations_url": "https://api.github.com/users/qianchu/orgs", "repos_url": "https://api.github.com/users/qianchu/repos", "events_url": "https://api.github.com/users/qianchu/events{/privacy}", "received_events_url": "https://api.github.com/users/qianchu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,588
1,588
1,588
NONE
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4116/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4116/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4116", "html_url": "https://github.com/huggingface/transformers/pull/4116", "diff_url": "https://github.com/huggingface/transformers/pull/4116.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4116.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4115
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4115/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4115/comments
https://api.github.com/repos/huggingface/transformers/issues/4115/events
https://github.com/huggingface/transformers/pull/4115
611,164,305
MDExOlB1bGxSZXF1ZXN0NDEyNDM3NTg2
4,115
Create model card
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Great!", "You do not rest even on Saturday! 🦾" ]
1,588
1,588
1,588
CONTRIBUTOR
null
Create Model card for distilroberta-base-finetuned-sentiment
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4115/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4115/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4115", "html_url": "https://github.com/huggingface/transformers/pull/4115", "diff_url": "https://github.com/huggingface/transformers/pull/4115.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4115.patch", "merged_at": 1588432772000 }
https://api.github.com/repos/huggingface/transformers/issues/4114
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4114/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4114/comments
https://api.github.com/repos/huggingface/transformers/issues/4114/events
https://github.com/huggingface/transformers/pull/4114
611,158,856
MDExOlB1bGxSZXF1ZXN0NDEyNDM0MDQw
4,114
model card for surajp/albert-base-sanskrit
{ "login": "parmarsuraj99", "id": 9317265, "node_id": "MDQ6VXNlcjkzMTcyNjU=", "avatar_url": "https://avatars.githubusercontent.com/u/9317265?v=4", "gravatar_id": "", "url": "https://api.github.com/users/parmarsuraj99", "html_url": "https://github.com/parmarsuraj99", "followers_url": "https://api.github.com/users/parmarsuraj99/followers", "following_url": "https://api.github.com/users/parmarsuraj99/following{/other_user}", "gists_url": "https://api.github.com/users/parmarsuraj99/gists{/gist_id}", "starred_url": "https://api.github.com/users/parmarsuraj99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/parmarsuraj99/subscriptions", "organizations_url": "https://api.github.com/users/parmarsuraj99/orgs", "repos_url": "https://api.github.com/users/parmarsuraj99/repos", "events_url": "https://api.github.com/users/parmarsuraj99/events{/privacy}", "received_events_url": "https://api.github.com/users/parmarsuraj99/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4114?src=pr&el=h1) Report\n> Merging [#4114](https://codecov.io/gh/huggingface/transformers/pull/4114?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/abb1fa3f374811ea09d0bc3440d820c50735008d&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4114/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4114?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4114 +/- ##\n=======================================\n Coverage 78.85% 78.85% \n=======================================\n Files 114 114 \n Lines 18688 18688 \n=======================================\n Hits 14736 14736 \n Misses 3952 3952 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4114?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4114/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.14% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4114/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.04% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4114?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4114?src=pr&el=footer). Last update [abb1fa3...e50935f](https://codecov.io/gh/huggingface/transformers/pull/4114?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Awesome! And our first model for Sanskrit. \r\n\r\n[model page](https://huggingface.co/surajp/albert-base-sanskrit)" ]
1,588
1,588
1,588
CONTRIBUTOR
null
Added Model card for `surajp/albert-base-sanskrit`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4114/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4114", "html_url": "https://github.com/huggingface/transformers/pull/4114", "diff_url": "https://github.com/huggingface/transformers/pull/4114.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4114.patch", "merged_at": 1588432539000 }
https://api.github.com/repos/huggingface/transformers/issues/4113
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4113/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4113/comments
https://api.github.com/repos/huggingface/transformers/issues/4113/events
https://github.com/huggingface/transformers/pull/4113
611,134,447
MDExOlB1bGxSZXF1ZXN0NDEyNDE3OTA2
4,113
[Reformer] Move model card to google model
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,588
1,588
1,588
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4113/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4113/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4113", "html_url": "https://github.com/huggingface/transformers/pull/4113", "diff_url": "https://github.com/huggingface/transformers/pull/4113.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4113.patch", "merged_at": 1588407923000 }