url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/12044 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12044/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12044/comments | https://api.github.com/repos/huggingface/transformers/issues/12044/events | https://github.com/huggingface/transformers/issues/12044 | 912,886,984 | MDU6SXNzdWU5MTI4ODY5ODQ= | 12,044 | Electra model vocabulary | {
"login": "avinashsai",
"id": 22453634,
"node_id": "MDQ6VXNlcjIyNDUzNjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/22453634?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avinashsai",
"html_url": "https://github.com/avinashsai",
"followers_url": "https://api.github.com/users/avinashsai/followers",
"following_url": "https://api.github.com/users/avinashsai/following{/other_user}",
"gists_url": "https://api.github.com/users/avinashsai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avinashsai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avinashsai/subscriptions",
"organizations_url": "https://api.github.com/users/avinashsai/orgs",
"repos_url": "https://api.github.com/users/avinashsai/repos",
"events_url": "https://api.github.com/users/avinashsai/events{/privacy}",
"received_events_url": "https://api.github.com/users/avinashsai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,622 | 1,623 | 1,623 | NONE | null | 1. Electra model vocabulary doesn't show the vocabulary words unlike other models where vocabulary words can be clearly seen.
2. In this link (https://huggingface.co/google/electra-base-discriminator/resolve/main/vocab.txt), strangely, all words are [unused0] barring [PAD], [CLS], and a few other special tokens.
3. How can I see the vocabulary words for the Electra tokenizer? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12044/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12043 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12043/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12043/comments | https://api.github.com/repos/huggingface/transformers/issues/12043/events | https://github.com/huggingface/transformers/pull/12043 | 912,853,188 | MDExOlB1bGxSZXF1ZXN0NjYyOTcyNTc2 | 12,043 | [Draft] Wav2Vec2 - Save intermediate PR verifying that implementation matches fairseq ones | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Delete after successful run of Wav2Vec2 PreTraining",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,651 | 1,626 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Using this branch of fairseq: https://github.com/patrickvonplaten/fairseq/pull/1
Running this code should work as expected:
```python
#!/usr/bin/env python3
import datasets
import fairseq
import torch
import soundfile as sf
import sys
from fairseq.criterions.wav2vec_criterion import Wav2VecCriterionConfig, Wav2vecCriterion
from fairseq.tasks.audio_pretraining import AudioPretrainingConfig, AudioPretrainingTask
from transformers import Wav2Vec2ForPreTraining, Wav2Vec2FeatureExtractor

hf_path = str(sys.argv[1])
fairseq_wav2vec2_path = str(sys.argv[2])

# load the fairseq checkpoint (returned as a one-element ensemble) and the HF counterparts
model, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task([fairseq_wav2vec2_path])
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(hf_path, do_normalize=False)
hf_model = Wav2Vec2ForPreTraining.from_pretrained(hf_path)
model = model[0]
model.eval()

dummy_speech_data = datasets.load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

def map_to_array(batch):
    speech_array, _ = sf.read(batch["file"])
    batch["speech"] = speech_array
    return batch

dummy_speech_data = dummy_speech_data.map(map_to_array, remove_columns=["file"])
inputs = feature_extractor(dummy_speech_data[:3]["speech"], return_tensors="pt", padding="longest", return_attention_mask=True)
input_values = inputs.input_values
attention_mask = inputs.attention_mask

# build the fairseq pretraining criterion; fairseq's padding_mask is the inverse of the HF attention mask
audio_cfg = AudioPretrainingConfig(labels="ltr", data="./data")
task = AudioPretrainingTask.setup_task(audio_cfg)
criterion = Wav2vecCriterion(Wav2VecCriterionConfig(infonce=True, log_keys=["prob_perplexity", "code_perplexity", "temp"], loss_weights=[0.1, 10]), task)
sample = {
    "net_input": {
        "source": input_values,
        "padding_mask": attention_mask.ne(1),
    },
    "id": torch.zeros((1,)),
}

# run both implementations with the same seed so the sampled masks/negatives match
torch.manual_seed(0)
loss, sample_size, log, result = criterion(model, sample)
torch.manual_seed(0)
hf_result = hf_model(input_values, attention_mask=attention_mask, mask_time_indices=result["mask_indices"], fsq_negs=result["negs"])

# compare the contrastive logits at the masked positions
hf_logits = hf_result.logits.permute(1, 2, 0)[result["mask_indices"]]
hf_logits = hf_logits.reshape(result['x'].shape[1:] + (-1,)).permute(2, 0, 1)
assert torch.allclose(hf_logits, result['x'], atol=1e-3), "wrong logits"
print("Loss diff %", 100 * (loss.detach().item() - hf_result.loss.detach().item()) / hf_result.loss.detach())
print("perplexity diff %", 100 * (hf_result.prob_perplexity.detach().item() - result["prob_perplexity"].detach().item()) / hf_result.prob_perplexity.detach())
```
and using [this](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt) as the fairseq checkpoint and [this](https://huggingface.co/patrickvonplaten/wav2vec2-base) model as the HF model.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12043/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12043",
"html_url": "https://github.com/huggingface/transformers/pull/12043",
"diff_url": "https://github.com/huggingface/transformers/pull/12043.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12043.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12042 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12042/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12042/comments | https://api.github.com/repos/huggingface/transformers/issues/12042/events | https://github.com/huggingface/transformers/pull/12042 | 912,837,868 | MDExOlB1bGxSZXF1ZXN0NjYyOTYwMzA0 | 12,042 | Add optional grouped parsers description to HfArgumentParser | {
"login": "peteriz",
"id": 232524,
"node_id": "MDQ6VXNlcjIzMjUyNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/232524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peteriz",
"html_url": "https://github.com/peteriz",
"followers_url": "https://api.github.com/users/peteriz/followers",
"following_url": "https://api.github.com/users/peteriz/following{/other_user}",
"gists_url": "https://api.github.com/users/peteriz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peteriz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peteriz/subscriptions",
"organizations_url": "https://api.github.com/users/peteriz/orgs",
"repos_url": "https://api.github.com/users/peteriz/repos",
"events_url": "https://api.github.com/users/peteriz/events{/privacy}",
"received_events_url": "https://api.github.com/users/peteriz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,622 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
This PR adds optional grouping to the argument parser of `HfArgumentParser` when multiple dataclasses are used (with different sub-grouped parameters, such as optimizer setup, model config, etc.) so that the displayed `-h` will print multiple grouped arguments in a more semantically organized way.
It uses the optional attribute `_argument_group_name = <some string>` in the dataclass. If it exists in the dataclass, a sub-parser (argument group) will be used instead of the root `HfArgumentParser`.
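For illustration, here is a minimal sketch of how such a dataclass might look (the dataclass names and fields below are invented for the example):
```python
from dataclasses import dataclass, field
from transformers import HfArgumentParser

@dataclass
class OptimizerArguments:
    _argument_group_name = "Optimizer arguments"  # optional: rendered as its own group in -h
    learning_rate: float = field(default=5e-5, metadata={"help": "Initial learning rate."})
    weight_decay: float = field(default=0.0, metadata={"help": "Weight decay to apply."})

@dataclass
class DataArguments:
    _argument_group_name = "Data arguments"
    train_file: str = field(default="train.json", metadata={"help": "Path to the training data."})

# `python my_script.py -h` now lists the two groups under separate headings
parser = HfArgumentParser((OptimizerArguments, DataArguments))
optim_args, data_args = parser.parse_args_into_dataclasses()
```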
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
@peteriz: Updated docstring inline
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12042/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12042",
"html_url": "https://github.com/huggingface/transformers/pull/12042",
"diff_url": "https://github.com/huggingface/transformers/pull/12042.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12042.patch",
"merged_at": 1623080833000
} |
https://api.github.com/repos/huggingface/transformers/issues/12041 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12041/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12041/comments | https://api.github.com/repos/huggingface/transformers/issues/12041/events | https://github.com/huggingface/transformers/issues/12041 | 912,716,099 | MDU6SXNzdWU5MTI3MTYwOTk= | 12,041 | Why my simple Bert model for text classification could not learn anything? | {
"login": "Hhhhhhhzf",
"id": 39818832,
"node_id": "MDQ6VXNlcjM5ODE4ODMy",
"avatar_url": "https://avatars.githubusercontent.com/u/39818832?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hhhhhhhzf",
"html_url": "https://github.com/Hhhhhhhzf",
"followers_url": "https://api.github.com/users/Hhhhhhhzf/followers",
"following_url": "https://api.github.com/users/Hhhhhhhzf/following{/other_user}",
"gists_url": "https://api.github.com/users/Hhhhhhhzf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hhhhhhhzf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hhhhhhhzf/subscriptions",
"organizations_url": "https://api.github.com/users/Hhhhhhhzf/orgs",
"repos_url": "https://api.github.com/users/Hhhhhhhzf/repos",
"events_url": "https://api.github.com/users/Hhhhhhhzf/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hhhhhhhzf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there!\r\n\r\nPlease use the forum https://discuss.huggingface.co/ to ask such questions :) We use issues to report bugs and for feature requests.",
"> Hi there!\r\n> \r\n> Please use the forum https://discuss.huggingface.co/ to ask such questions :) We use issues to report bugs and for feature requests.\r\n\r\nI am sorry",
"I find the key of this problem, only need to change the learning rate to 1e-5.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,626 | 1,626 | NONE | null | Hello, I tried `transformers.BertModel` for a simple text classification task, but the result puzzled me.
The code is simple; I implemented the model with PyTorch.
The components are as follows:
```
# a Dataset class for BertModel
class BertDataset(Dataset):
    def __init__(self, train_file, tokenizer):
        super(BertDataset, self).__init__()
        self.train_file = train_file
        self.data = []
        self.label2id = {}
        self.id2label = {}
        self.tokenizer = tokenizer
        self.init()

    def init(self):
        with open(self.train_file, 'r', encoding='utf-8') as f:
            for line in f:
                blocks = line.strip().split('\t')
                if blocks[1] not in self.label2id:
                    self.label2id[blocks[1]] = len(self.label2id)
                    self.id2label[len(self.id2label)] = blocks[1]
                self.data.append({'token': self.tokenizer(blocks[0], add_special_tokens=True, max_length=100,
                                                          padding='max_length', return_tensors='pt',
                                                          truncation=True),
                                  'label': self.label2id[blocks[1]]})

    def __getitem__(self, item):
        return self.data[item]

    def __len__(self):
        return len(self.data)

# a collate function for torch.utils.data.DataLoader
def bert_collate_fn(batch_data):
    input_ids, token_type_ids, attention_mask, labels = [], [], [], []
    for instance in copy.deepcopy(batch_data):
        input_ids.append(instance['token']['input_ids'][0].squeeze(0))
        token_type_ids.append(instance['token']['token_type_ids'][0].squeeze(0))
        attention_mask.append(instance['token']['attention_mask'][0].squeeze(0))
        labels.append(instance['label'])
    return torch.stack(input_ids), torch.stack(token_type_ids), \
           torch.stack(attention_mask), torch.tensor(labels)

# Model
class PTModel(nn.Module):
    def __init__(self, model, n_class):
        super(PTModel, self).__init__()
        self.n_class = n_class
        self.model = model
        self.linear = nn.Linear(768, self.n_class)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, input_ids, token_type_ids=None, attention_mask=None):
        cls_emb = self.model(input_ids=input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask)
        cls_emb = cls_emb[0][:, 0, :].squeeze(1)
        logits = self.linear(cls_emb)
        # logits = self.softmax(logits)
        return logits

# train code
def train1():
    # data
    batch_size = 16
    tokenizer = BertTokenizer.from_pretrained(pretrained_path)
    dataset = BertDataset('../data/dataset/data.txt', tokenizer)
    train_len = int(len(dataset) * 0.8)
    train_dataset, dev_dataset = random_split(dataset=dataset, lengths=[train_len, len(dataset) - train_len])
    train_dataloader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True, collate_fn=bert_collate_fn)
    dev_dataloader = DataLoader(dev_dataset, batch_size=batch_size, shuffle=True, collate_fn=bert_collate_fn)
    # model
    device = torch.device('cuda:{}'.format(args.cuda))
    bert_model = BertModel.from_pretrained(pretrained_path)
    model = PTModel(model=bert_model, n_class=len(dataset.label2id)).to(device)
    optimizer = torch.optim.Adam(params=model.parameters(), lr=args.lr)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer=optimizer, milestones=[30, 40], gamma=0.1)
    loss_func = torch.nn.CrossEntropyLoss()
    # train
    for i in range(args.epoch):
        model.train()
        train_loss, dev_loss, f1_train, f1_dev = [], [], [], []
        dev_pred_list, dev_gold_list = [], []
        for input_ids, token_type_ids, attention_mask, label in tqdm(train_dataloader):
            input_ids, token_type_ids, attention_mask, label = input_ids.to(device), token_type_ids.to(device), \
                                                               attention_mask.to(device), label.to(device)
            outputs = model(input_ids=input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask)
            array_outputs = np.array(outputs.cuda().data.cpu())
            optimizer.zero_grad()
            loss = loss_func(outputs, label)
            results = outputs.cuda().data.cpu().argmax(dim=1)
            score = f1_score(label.cuda().data.cpu(), results, average='micro')
            train_loss.append(loss.item())
            f1_train.append(score)
            # optim
            loss.backward()
            optimizer.step()
        scheduler.step()
        print('epoch {}'.format(i))
        print('train_loss:{}'.format(np.mean(train_loss)))
        print('train_f1:{}'.format(np.mean(f1_train)))
```
The training log follows (only 10 epochs shown), and the result is already clear: the model could not learn anything!
PS: the learning rate was 1e-3.
```
100%|█████████████████████████████████████████| 250/250 [00:43<00:00, 5.72it/s]
epoch 0
train_loss:4.217772917747498
train_f1:0.081
100%|███████████████████████████████████████████| 63/63 [00:03<00:00, 19.52it/s]
dev_f1:0.08928571428571429
dev_loss:4.111690880760314
100%|█████████████████████████████████████████| 250/250 [00:43<00:00, 5.71it/s]
epoch 1
train_loss:4.094675525665283
train_f1:0.084
100%|███████████████████████████████████████████| 63/63 [00:03<00:00, 19.16it/s]
dev_f1:0.0882936507936508
dev_loss:4.1316274839734275
100%|█████████████████████████████████████████| 250/250 [00:43<00:00, 5.71it/s]
epoch 2
train_loss:4.084259546279907
train_f1:0.08525
100%|███████████████████████████████████████████| 63/63 [00:03<00:00, 19.37it/s]
dev_f1:0.08928571428571429
dev_loss:4.108004717599778
100%|█████████████████████████████████████████| 250/250 [00:44<00:00, 5.62it/s]
epoch 3
train_loss:4.0770455904006955
train_f1:0.09425
100%|███████████████████████████████████████████| 63/63 [00:03<00:00, 19.07it/s]
dev_f1:0.08928571428571429
dev_loss:4.1077501395392035
100%|█████████████████████████████████████████| 250/250 [00:45<00:00, 5.54it/s]
epoch 4
train_loss:4.070150758743286
train_f1:0.086
100%|███████████████████████████████████████████| 63/63 [00:03<00:00, 19.41it/s]
dev_f1:0.09027777777777778
dev_loss:4.103204295748756
100%|█████████████████████████████████████████| 250/250 [00:45<00:00, 5.52it/s]
epoch 5
train_loss:4.064209712982178
train_f1:0.0895
100%|███████████████████████████████████████████| 63/63 [00:03<00:00, 19.31it/s]
dev_f1:0.08928571428571429
dev_loss:4.117827377622089
100%|█████████████████████████████████████████| 250/250 [00:43<00:00, 5.70it/s]
epoch 6
train_loss:4.065111406326294
train_f1:0.08425
100%|███████████████████████████████████████████| 63/63 [00:03<00:00, 19.34it/s]
dev_f1:0.0882936507936508
dev_loss:4.099656305615864
100%|█████████████████████████████████████████| 250/250 [00:44<00:00, 5.58it/s]
epoch 7
train_loss:4.0547873935699466
train_f1:0.09175
100%|███████████████████████████████████████████| 63/63 [00:03<00:00, 19.30it/s]
dev_f1:0.08928571428571429
dev_loss:4.105985126798115
100%|█████████████████████████████████████████| 250/250 [00:43<00:00, 5.76it/s]
epoch 8
train_loss:4.0595885887145995
train_f1:0.08875
100%|███████████████████████████████████████████| 63/63 [00:03<00:00, 19.26it/s]
dev_f1:0.09027777777777778
dev_loss:4.121003010916332
100%|█████████████████████████████████████████| 250/250 [00:45<00:00, 5.46it/s]
epoch 9
train_loss:4.054850312232971
train_f1:0.08825
100%|███████████████████████████████████████████| 63/63 [00:03<00:00, 18.86it/s]
dev_f1:0.08928571428571429
dev_loss:4.12501887669639
100%|█████████████████████████████████████████| 250/250 [00:45<00:00, 5.46it/s]
epoch 10
train_loss:4.0566882238388065
train_f1:0.08525
100%|███████████████████████████████████████████| 63/63 [00:03<00:00, 18.85it/s]
dev_f1:0.09126984126984126
dev_loss:4.103033436669244
```
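(As the resolution comment above says, the key turned out to be the learning rate: changing it to 1e-5 made the model learn. A one-line change to the optimizer setup in the training code:)
```python
# 1e-3 is far too high for fine-tuning BERT and quickly destroys the pretrained weights;
# typical fine-tuning learning rates are in the 1e-5 to 5e-5 range
optimizer = torch.optim.Adam(params=model.parameters(), lr=1e-5)
```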
Before this BertModel, I tried an LSTM, and the LSTM worked well: the dev F1 reached 0.96.
```
# LSTM
class SimpleModel(nn.Module):
    def __init__(self, **kwargs):
        super(SimpleModel, self).__init__()
        self.embedding = nn.Embedding.from_pretrained(kwargs['pretrained_embedding'], freeze=False)
        self.lstm = nn.LSTM(kwargs['pretrained_embedding'].shape[1],
                            kwargs['hidden_size'],
                            batch_first=True,
                            bidirectional=True)
        self.linear = nn.Linear(kwargs['hidden_size'] * 2, kwargs['n_class'])

    def forward(self, inputs, lens):
        inputs = self.embedding(inputs)
        _, (h, _) = self.lstm(pack_padded_sequence(inputs, lens, batch_first=True, enforce_sorted=False))
        h = h.permute(1, 0, 2).contiguous().view(h.shape[1], -1)
        logits = self.linear(h)
        # note: applying softmax before nn.CrossEntropyLoss is redundant, since CE expects raw logits
        logits = logits.softmax(dim=-1)
        return logits
```
Could anyone tell me why this code doesn't work?
Is there something wrong with my implementation?
I have been confused for days.
Thank you very much!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12041/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12040 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12040/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12040/comments | https://api.github.com/repos/huggingface/transformers/issues/12040/events | https://github.com/huggingface/transformers/pull/12040 | 912,359,621 | MDExOlB1bGxSZXF1ZXN0NjYyNTI2MTgx | 12,040 | Add torch to requirements.txt in language-modeling | {
"login": "cdleong",
"id": 4109253,
"node_id": "MDQ6VXNlcjQxMDkyNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4109253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cdleong",
"html_url": "https://github.com/cdleong",
"followers_url": "https://api.github.com/users/cdleong/followers",
"following_url": "https://api.github.com/users/cdleong/following{/other_user}",
"gists_url": "https://api.github.com/users/cdleong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cdleong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cdleong/subscriptions",
"organizations_url": "https://api.github.com/users/cdleong/orgs",
"repos_url": "https://api.github.com/users/cdleong/repos",
"events_url": "https://api.github.com/users/cdleong/events{/privacy}",
"received_events_url": "https://api.github.com/users/cdleong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks again!"
] | 1,622 | 1,623 | 1,623 | CONTRIBUTOR | null | It seems requirements.txt was missing `torch`, which is required. Just adding it.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12040/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12040",
"html_url": "https://github.com/huggingface/transformers/pull/12040",
"diff_url": "https://github.com/huggingface/transformers/pull/12040.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12040.patch",
"merged_at": 1623157355000
} |
https://api.github.com/repos/huggingface/transformers/issues/12039 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12039/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12039/comments | https://api.github.com/repos/huggingface/transformers/issues/12039/events | https://github.com/huggingface/transformers/issues/12039 | 912,301,044 | MDU6SXNzdWU5MTIzMDEwNDQ= | 12,039 | pipelines should allow passing in tokenizer arguments | {
"login": "EtienneT",
"id": 265924,
"node_id": "MDQ6VXNlcjI2NTkyNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/265924?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EtienneT",
"html_url": "https://github.com/EtienneT",
"followers_url": "https://api.github.com/users/EtienneT/followers",
"following_url": "https://api.github.com/users/EtienneT/following{/other_user}",
"gists_url": "https://api.github.com/users/EtienneT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EtienneT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EtienneT/subscriptions",
"organizations_url": "https://api.github.com/users/EtienneT/orgs",
"repos_url": "https://api.github.com/users/EtienneT/repos",
"events_url": "https://api.github.com/users/EtienneT/events{/privacy}",
"received_events_url": "https://api.github.com/users/EtienneT/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nyou can pass a 2-tuple `(tokenizer_name, tokenizer_kwargs)` to achieve this. E.g.:\r\n```python\r\nfrom transformers import pipeline\r\nclassifier = pipeline('sentiment-analysis', tokenizer=(tokenizer_name, {\"padding\": True, \"truncation\": True, \"max_length\": 512}), device=0)\r\n```\r\n@patrickvonplaten Any particular reason why this is not documented? ",
"@mariosasko thanks for your response! I tried this, but unfortunately I get the same error.\r\n```\r\nfrom transformers import pipeline\r\nmodel_name = 'distilbert-base-uncased-finetuned-sst-2-english'\r\nclassifier = pipeline('sentiment-analysis', model=model_name, tokenizer=(model_name, {\"padding\": True, \"truncation\": True, \"max_length\": 512}), device=0)\r\n```\r\n\r\nThe error:\r\n\r\n> Token indices sequence length is longer than the specified maximum sequence length for this model (1055 > 512). Running this sequence through the model will result in indexing errors\r\n\r\nThanks,",
"My bad. Just checked the source. This should work:\r\n```python\r\nfrom transformers import pipeline\r\nclassifier = pipeline('sentiment-analysis', device=0)\r\nclassifier(texts, padding=True, truncation=True, max_length=512)\r\n```",
"This works! I was sure I had tried that, but it seems not.\r\n\r\nThank you!"
] | 1,622 | 1,622 | 1,622 | NONE | null | # 🚀 Feature request
It should be possible to pass in additional arguments for the tokenizer in the pipeline constructor.
Something like this:
```
from transformers import pipeline
classifier = pipeline('sentiment-analysis', padding=True, truncation=True, max_length=512, device=0)
```
## Motivation
For example, with a sentiment-analysis pipeline, if the model has a maximum number of tokens and you pass in text longer than that, the pipeline will crash. It would be really nice to be able to provide additional arguments for the tokenizer, such as padding=True, truncation=True, max_length=512. The only workaround I found was to create the tokenizer and model separately and provide the arguments to the tokenizer directly.
Here is how I do it right now:
```
pt_batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
for x in pt_batch.keys():
    pt_batch[x] = pt_batch[x].to('cuda')
pt_outputs = model(**pt_batch)
```
and how I would prefer to be able to do it instead:
```
from transformers import pipeline
classifier = pipeline('sentiment-analysis', padding=True, truncation=True, max_length=512, device=0)
preds = classifier(texts)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12039/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12038 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12038/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12038/comments | https://api.github.com/repos/huggingface/transformers/issues/12038/events | https://github.com/huggingface/transformers/issues/12038 | 912,264,879 | MDU6SXNzdWU5MTIyNjQ4Nzk= | 12,038 | Support for pointer-generator architectures. | {
"login": "AmirAktify",
"id": 62885948,
"node_id": "MDQ6VXNlcjYyODg1OTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/62885948?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmirAktify",
"html_url": "https://github.com/AmirAktify",
"followers_url": "https://api.github.com/users/AmirAktify/followers",
"following_url": "https://api.github.com/users/AmirAktify/following{/other_user}",
"gists_url": "https://api.github.com/users/AmirAktify/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmirAktify/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmirAktify/subscriptions",
"organizations_url": "https://api.github.com/users/AmirAktify/orgs",
"repos_url": "https://api.github.com/users/AmirAktify/repos",
"events_url": "https://api.github.com/users/AmirAktify/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmirAktify/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,626 | 1,626 | NONE | null | # 🚀 Feature request
Is there interest in adding pointer-generator architecture support to huggingface? These are currently supported in [fairseq](https://github.com/pytorch/fairseq/blob/master/examples/pointer_generator/README.md), and in general should not be terrible to add for most encoder-decoder seq2seq tasks and models.
## Motivation
Pointer-generator architectures generally give SOTA results for extractive summarization, as well as for semantic parsing. (See for instance [this paper](https://arxiv.org/pdf/2001.11458.pdf).)
## Your contribution
If there is interest but not bandwidth from huggingface members, I could try to add pointer-generator support for a specific architecture such as T5 and see how hard it would be to port over fairseq's implementation, for instance.
Note: apologies also if I've missed where huggingface supports it already.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12038/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12038/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12037 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12037/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12037/comments | https://api.github.com/repos/huggingface/transformers/issues/12037/events | https://github.com/huggingface/transformers/issues/12037 | 912,253,837 | MDU6SXNzdWU5MTIyNTM4Mzc= | 12,037 | Using the latest DPR checkpoint available with HuggingFace DPR class | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi ! Yes you should be fined. Under the hood it's actually the same tokenizer as `bert-base-uncased`.\r\n\r\nAlso it would be nice to add the new DPR checkpoints on the Hub as well.\r\n\r\nWhat changes did you have to do in the convert_dpr.py file ?",
"Perfect and thanks @lhoestq \r\n\r\n Actually, it was a very minor change. With the current transformer version, it gives an error [in this line](https://github.com/huggingface/transformers/blob/finalize_rag/src/transformers/convert_dpr_original_checkpoint_to_pytorch.py#L71), saying no positional_id key in the state_dict. It simply worked with setting the **strict= False**.\r\n \r\n But for clarity I changed as follows (check newly added line :)):\r\n \r\n```\r\n class DPRQuestionEncoderState(DPRState):\r\n def load_dpr_model(self):\r\n model = DPRQuestionEncoder(DPRConfig(**BertConfig.get_config_dict(\"bert-base-uncased\")[0]))\r\n print(\"Loading DPR biencoder from {}\".format(self.src_file))\r\n saved_state = load_states_from_checkpoint(self.src_file)\r\n encoder, prefix = model.question_encoder, \"question_model.\"\r\n model_state_dict = encoder.state_dict()\r\n state_dict = {}\r\n\r\n for key, value in saved_state.model_dict.items():\r\n if key.startswith(prefix):\r\n key = key[len(prefix) :]\r\n if not key.startswith(\"encode_proj.\"):\r\n key = \"bert_model.\" + key\r\n state_dict[key] = value\r\n\r\n #newly added\r\n for k , v in model_state_dict.items():\r\n if not k in state_dict:\r\n print(\"warnning can't find key:\",k)\r\n state_dict[k]=v\r\n\r\n #encoder.save_pretrained(save_directory='./', state_dict=state_dict) #no need to modify\r\n encoder.load_state_dict(state_dict)\r\n return model\r\n```\r\n"
] | 1,622 | 1,623 | 1,623 | CONTRIBUTOR | null | Hi,
There's a new DPR checkpoint available in the [DPR original repository](https://github.com/facebookresearch/DPR#new-march-2021-retrieval-model), which shows nice improvements. I converted the checkpoint using convert_dpr.py (with a minor modification) and it is working fine.
I have one question regarding the correct tokenizer to use for the question_encoder and context_encoder. Since I can load the tokenizer from the following paths (AutoTokenizer.from_pretrained), I assumed both of these tokenizers behave in the same way (PreTrainedTokenizerFast).
1. **facebook/dpr-question_encoder-multiset-base**
2. **facebook/dpr-question_encoder-single-nq-base**
So there won't be any problem if I use a tokenizer loaded from either of the above paths with the new checkpoint, right (since DPR uses HuggingFace tokenizers)?
@lhoestq
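(The reply in the comments above confirms that under the hood both are the same tokenizer as `bert-base-uncased`, so a quick sanity check along these lines should pass; the sample question is arbitrary:)
```python
from transformers import AutoTokenizer

tok_multiset = AutoTokenizer.from_pretrained("facebook/dpr-question_encoder-multiset-base")
tok_single = AutoTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")

text = "who wrote the declaration of independence?"
# the same vocabulary and tokenization rules should yield identical token ids
assert tok_multiset(text)["input_ids"] == tok_single(text)["input_ids"]
```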
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12037/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12036 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12036/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12036/comments | https://api.github.com/repos/huggingface/transformers/issues/12036/events | https://github.com/huggingface/transformers/issues/12036 | 912,042,919 | MDU6SXNzdWU5MTIwNDI5MTk= | 12,036 | I cannot import deepspeed | {
"login": "Arij-Aladel",
"id": 68355048,
"node_id": "MDQ6VXNlcjY4MzU1MDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/68355048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arij-Aladel",
"html_url": "https://github.com/Arij-Aladel",
"followers_url": "https://api.github.com/users/Arij-Aladel/followers",
"following_url": "https://api.github.com/users/Arij-Aladel/following{/other_user}",
"gists_url": "https://api.github.com/users/Arij-Aladel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arij-Aladel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arij-Aladel/subscriptions",
"organizations_url": "https://api.github.com/users/Arij-Aladel/orgs",
"repos_url": "https://api.github.com/users/Arij-Aladel/repos",
"events_url": "https://api.github.com/users/Arij-Aladel/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arij-Aladel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sorry my bad I have not noticed that thee was changes on transformers and should upgrade the installation"
] | 1,622 | 1,622 | 1,622 | NONE | null | ```
from transformers.file_utils import CONFIG_NAME
from transformers.deepspeed import deepspeed_config, is_deepspeed_zero3_enabled
```
Why am I getting this error? Even though `file_utils` and `deepspeed` are in the same directory, I can import the first one but not the second, which I don't understand.
> ---------------------------------------------------------------------------
> ModuleNotFoundError Traceback (most recent call last)
> File /data/home/admin/.conda/envs/cmr_env/lib/python3.8/site-packages/IPython/core/interactiveshell.py, in run_code:
> Line 3437: exec(code_obj, self.user_global_ns, self.user_ns)
>
> In [12]:
> Line 1: from transformers.deepspeed import deepspeed_config, is_deepspeed_zero3_enabled
>
> ModuleNotFoundError: No module named 'transformers.deepspeed'
> --------------------------------------------------------------------------- | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12036/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12035 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12035/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12035/comments | https://api.github.com/repos/huggingface/transformers/issues/12035/events | https://github.com/huggingface/transformers/pull/12035 | 912,172,940 | MDExOlB1bGxSZXF1ZXN0NjYyMzU5OTQy | 12,035 | Fixed Typo in modeling_bart.py | {
"login": "ceevaaa",
"id": 36535845,
"node_id": "MDQ6VXNlcjM2NTM1ODQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/36535845?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ceevaaa",
"html_url": "https://github.com/ceevaaa",
"followers_url": "https://api.github.com/users/ceevaaa/followers",
"following_url": "https://api.github.com/users/ceevaaa/following{/other_user}",
"gists_url": "https://api.github.com/users/ceevaaa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ceevaaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ceevaaa/subscriptions",
"organizations_url": "https://api.github.com/users/ceevaaa/orgs",
"repos_url": "https://api.github.com/users/ceevaaa/repos",
"events_url": "https://api.github.com/users/ceevaaa/events{/privacy}",
"received_events_url": "https://api.github.com/users/ceevaaa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot for fixing this!\r\n\r\nCould you run `make fix-copies` and then push again?"
] | 1,622 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
Fixes #11895
Fixed typo `(seq_len, batch, embed_dim)` to `(batch, seq_len, embed_dim)` in lines
[373](https://github.com/huggingface/transformers/blob/996a315e76f6c972c854990e6114226a91bc0a90/src/transformers/models/bart/modeling_bart.py#L373) and [376](https://github.com/huggingface/transformers/blob/996a315e76f6c972c854990e6114226a91bc0a90/src/transformers/models/bart/modeling_bart.py#L376), as discussed [here](https://github.com/huggingface/transformers/issues/11895).
@NielsRogge
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12035/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12035",
"html_url": "https://github.com/huggingface/transformers/pull/12035",
"diff_url": "https://github.com/huggingface/transformers/pull/12035.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12035.patch",
"merged_at": 1623046465000
} |
https://api.github.com/repos/huggingface/transformers/issues/12034 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12034/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12034/comments | https://api.github.com/repos/huggingface/transformers/issues/12034/events | https://github.com/huggingface/transformers/issues/12034 | 912,078,584 | MDU6SXNzdWU5MTIwNzg1ODQ= | 12,034 | After loading fine-tuned model from local and use it for prediction, it continue training from scratch again! | {
"login": "Fushier",
"id": 48719664,
"node_id": "MDQ6VXNlcjQ4NzE5NjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/48719664?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Fushier",
"html_url": "https://github.com/Fushier",
"followers_url": "https://api.github.com/users/Fushier/followers",
"following_url": "https://api.github.com/users/Fushier/following{/other_user}",
"gists_url": "https://api.github.com/users/Fushier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Fushier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Fushier/subscriptions",
"organizations_url": "https://api.github.com/users/Fushier/orgs",
"repos_url": "https://api.github.com/users/Fushier/repos",
"events_url": "https://api.github.com/users/Fushier/events{/privacy}",
"received_events_url": "https://api.github.com/users/Fushier/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No, this means you are loading a model that hasn't been fine-tuned. The model present in in your `./model` directory seems to have a pretraining head which is discarded, and a new token classification head is instantiated.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,626 | 1,626 | NONE | null | transformers: 4.6.1
torch: 1.3
gpu: k40m * 2
datasets: msra_ner
model: hfl/chinese-bert-wwm
I'm fine-tuning a model for token classification, and after training, I save the model:
```python
trainer.save_model('./model')
trainer.save_metrics('./model')
```
Now I load the saved model:
```python
tokenizer = AutoTokenizer.from_pretrained("./model")
config = transformers.AutoConfig.from_pretrained("./model")
model = AutoModelForTokenClassification.from_pretrained("./model", config=config)
args = TrainingArguments(
    output_dir='./results'
)
trainer = Trainer(
    model,
    args,
    data_collator=data_collator,
    tokenizer=tokenizer,
)
```
And then predict with the test dataset:
```python
predictions, labels, metrics = trainer.predict(tokenized_datasets)
```
But in the terminal, I get:
```
Some weights of the model checkpoint at hfl/chinese-bert-wwm were not used when initializing BertForTokenClassification: ['cls.predictions.transform.dense.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.bias', 'cls.predictions.decoder.weight']
- This IS expected if you are initializing BertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForTokenClassification were not initialized from the model checkpoint at hfl/chinese-bert-wwm and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
0%| | 0/28130 [00:00<?, ?it/s]
```
It seems like the model starts training again?
@sgugger
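(In line with the diagnosis in the comments above, one quick way to check what is actually saved in `./model` is to inspect its config: after a successful token-classification fine-tune, the saved config should advertise the classification architecture.)
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("./model")
# expect something like ['BertForTokenClassification'] if the fine-tuned model was saved;
# a pretraining architecture here means the wrong checkpoint is being loaded
print(config.architectures)
```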
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12034/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12033 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12033/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12033/comments | https://api.github.com/repos/huggingface/transformers/issues/12033/events | https://github.com/huggingface/transformers/issues/12033 | 912,062,427 | MDU6SXNzdWU5MTIwNjI0Mjc= | 12,033 | A bug of modeling_wav2vec2.py:1033 line | {
"login": "zhangbo2008",
"id": 35842504,
"node_id": "MDQ6VXNlcjM1ODQyNTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/35842504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangbo2008",
"html_url": "https://github.com/zhangbo2008",
"followers_url": "https://api.github.com/users/zhangbo2008/followers",
"following_url": "https://api.github.com/users/zhangbo2008/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangbo2008/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangbo2008/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangbo2008/subscriptions",
"organizations_url": "https://api.github.com/users/zhangbo2008/orgs",
"repos_url": "https://api.github.com/users/zhangbo2008/repos",
"events_url": "https://api.github.com/users/zhangbo2008/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangbo2008/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great catch @zhangbo2008!\r\n\r\nWould you like to open a PR to fix it?",
"ok \r\ni see you have fixed it in the latest version thanks for your works."
] | 1,622 | 1,624 | 1,624 | NONE | null | transformers/models/wav2vec2/modeling_wav2vec2.py
The docstring example there currently reads:
```python
>>> import torch
>>> from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
>>> from datasets import load_dataset
>>> import soundfile as sf
>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
>>> def map_to_array(batch):
>>>     speech, _ = sf.read(batch["file"])
>>>     batch["speech"] = speech
>>>     return batch
>>> ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)
>>> input_values = processor(ds["speech"][0], return_tensors="pt").input_values  # Batch size 1
>>> logits = model(input_values).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.decode(predicted_ids[0])
>>> # compute loss
>>> target_transcription = "A MAN SAID TO THE UNIVERSE SIR I EXIST"
>>> # wrap processor as target processor to encode labels
>>> with processor.as_target_processor():
>>>     labels = processor(transcription, return_tensors="pt").input_ids
>>> loss = model(input_values, labels=labels).loss
```
It should be:
```python
>>> with processor.as_target_processor():
>>>     labels = processor(target_transcription, return_tensors="pt").input_ids
```
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12033/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12032 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12032/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12032/comments | https://api.github.com/repos/huggingface/transformers/issues/12032/events | https://github.com/huggingface/transformers/issues/12032 | 912,042,919 | MDU6SXNzdWU5MTIwNDI5MTk= | 12,032 | Documentation of `past_key_values` for input and output of `PegasusModel` is not aligned | {
"login": "ryangawei",
"id": 25638070,
"node_id": "MDQ6VXNlcjI1NjM4MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/25638070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryangawei",
"html_url": "https://github.com/ryangawei",
"followers_url": "https://api.github.com/users/ryangawei/followers",
"following_url": "https://api.github.com/users/ryangawei/following{/other_user}",
"gists_url": "https://api.github.com/users/ryangawei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ryangawei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryangawei/subscriptions",
"organizations_url": "https://api.github.com/users/ryangawei/orgs",
"repos_url": "https://api.github.com/users/ryangawei/repos",
"events_url": "https://api.github.com/users/ryangawei/events{/privacy}",
"received_events_url": "https://api.github.com/users/ryangawei/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @AlfredWGA , you are right ! The description of input `past_key_values` is old and should be updated. \r\nThe correct shape is as described by the output docstring. Thanks for reporting!"
] | 1,622 | 1,623 | 1,623 | NONE | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-4.15.0-144-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
Pegasus @patrickvonplaten, @patil-suraj
## Information
According to the documentation of `PegasusModel`, the `past_key_values` for input and output have different shapes:
```
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers with each tuple having 2 tuples each of which has 2 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) –
Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding.
```
```
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) –
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
```
I'm trying to reproduce the behaviour of passing `past_key_values` as an input, so I construct a dummy `past_key_values` to feed into the model:
```python
decoder_seq_length = decoder_input_ids.shape[1]
dummy_past_value_keys = torch.ones(size=[1, model.config.num_attention_heads, decoder_seq_length-1, int(model.config.d_model / model.config.num_attention_heads)], dtype=torch.float32)
pkv_tuple = ((dummy_past_value_keys, dummy_past_value_keys), (dummy_past_value_keys, dummy_past_value_keys))
pkv_tuple = tuple([pkv_tuple] * model.config.num_hidden_layers)
outputs = model(input_ids, decoder_input_ids=decoder_input_ids, past_key_values=pkv_tuple)
```
Then I got the following error:
```
AttributeError: 'tuple' object has no attribute 'shape'
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-22-be142139ae01> in <module>
----> 1 outputs = model(input_ids, decoder_input_ids=decoder_input_ids, past_key_values=pkv_tuple)
~/anaconda3/envs/wga/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
~/anaconda3/envs/wga/lib/python3.8/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1267 )
1268
-> 1269 outputs = self.model(
1270 input_ids,
1271 attention_mask=attention_mask,
~/anaconda3/envs/wga/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
~/anaconda3/envs/wga/lib/python3.8/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
1151
1152 # decoder outputs consists of (dec_features, past_key_value, dec_hidden, dec_attn)
-> 1153 decoder_outputs = self.decoder(
1154 input_ids=decoder_input_ids,
1155 attention_mask=decoder_attention_mask,
~/anaconda3/envs/wga/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
~/anaconda3/envs/wga/lib/python3.8/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, head_mask, cross_attn_head_mask, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
941
942 # past_key_values_length
--> 943 past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0
944
945 if inputs_embeds is None:
AttributeError: 'tuple' object has no attribute 'shape'
```
And if I instead build `past_key_values` using the structure described in the output documentation:
```python
pkv_tuple = (dummy_past_value_keys,) * 4
pkv_tuple = tuple([pkv_tuple] * model.config.num_hidden_layers)
outputs = model(input_ids, decoder_input_ids=decoder_input_ids, past_key_values=pkv_tuple)
```
No error shows up. Maybe I misunderstood the documentation, but the input description, `Tuple of length config.n_layers with each tuple having 2 tuples each of which has 2 tensors`, reads like a `Tuple[Tuple[Tuple[torch.Tensor]]]`, which is quite confusing. The line that throws the exception, `past_key_values_length = past_key_values[0][0].shape[2]`, accesses a tensor's shape directly, which only works with a `Tuple[Tuple[torch.Tensor]]`, so it makes more sense for the input and output `past_key_values` to share the same structure. I wonder if the description of the input `past_key_values` is an old version that hasn't been updated? Thank you.
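For completeness, a minimal round-trip check (a sketch reusing the variables above) should also work, which confirms that the structure the model returns is exactly the structure it expects back:

```python
# Sketch: the past_key_values returned by one forward pass can be fed
# directly into the next one, so input and output share the same
# Tuple[Tuple[torch.Tensor]] structure.
first = model(input_ids, decoder_input_ids=decoder_input_ids[:, :-1], use_cache=True)
second = model(
    input_ids,
    decoder_input_ids=decoder_input_ids[:, -1:],  # only the newest token
    past_key_values=first.past_key_values,
)
```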
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12032/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12031 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12031/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12031/comments | https://api.github.com/repos/huggingface/transformers/issues/12031/events | https://github.com/huggingface/transformers/pull/12031 | 912,035,007 | MDExOlB1bGxSZXF1ZXN0NjYyMjQwNDYz | 12,031 | Layoutlmv2 port with testing | {
"login": "raguiar2",
"id": 21694506,
"node_id": "MDQ6VXNlcjIxNjk0NTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/21694506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raguiar2",
"html_url": "https://github.com/raguiar2",
"followers_url": "https://api.github.com/users/raguiar2/followers",
"following_url": "https://api.github.com/users/raguiar2/following{/other_user}",
"gists_url": "https://api.github.com/users/raguiar2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raguiar2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raguiar2/subscriptions",
"organizations_url": "https://api.github.com/users/raguiar2/orgs",
"repos_url": "https://api.github.com/users/raguiar2/repos",
"events_url": "https://api.github.com/users/raguiar2/events{/privacy}",
"received_events_url": "https://api.github.com/users/raguiar2/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Looking good! Feel free to ping me when you want a review :)",
"@LysandreJik so, the logic in this PR is ready to be reviewed (mostly from https://github.com/microsoft/unilm with a few bug fixes). The one problem I'm having with this PR is that the detectron2 library requires torch to be **already installed** to build the wheel - and doesn't list it as a dependency!\r\n\r\n I have searched through a bunch of python documentation, but I'm still not sure how we can force the torch install to occur before the detectron2 one in setup.py, so any help here would be appreciated if you've seen something like this before. I have also filed an issue (https://github.com/facebookresearch/detectron2/issues/3124) but I'm not sure it's in the scope of the library to support install without torch already on the system. \r\n \r\nI could edit the CI instead to add torch before running setup.py, but that seems like it would be error-prone down the road if people are trying to install without pytorch on their system and transformers fails to build. What are your thoughts on how to best solve this issue?\r\n\r\nCopying the parts of the detectron library I needed to make layoutlmv2 work was something else I considered, but it is a substantial chunk of the detectron2 code, so I think it's better to just use the library.",
"@NielsRogge, you have played with LayoutLM in the past, do you want to give this PR a look?\r\n\r\nIf the install needs to be done as a two-step process (first all deps, then `detection2`) then I would advocate for not putting it in the `setup.py`, and thoroughly document the behavior, both in the documentation and in the code with the appropriate errors raised when a detectron-less install is detected.",
"Any idea when this PR is going to be merged? I am working on TF version of layoutlm 2 and I'd like for this to be merged before I create a branch for layoutlm 2 in TF",
"Hi @atahmasb, the author of this PR did not reply yet regarding my review, but maybe I could work on this in a new PR.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,628 | 1,628 | NONE | null | # What does this PR do?
Trying to open up my own PR to figure out what's wrong with the install on https://github.com/huggingface/transformers/pull/11933, as it seems to be working locally but not on CircleCI
Fixes #11932
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [microsoft/unilm#325](https://github.com/microsoft/unilm/issues/325)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). Not yet
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
```bibtex
@inproceedings{Xu2020LayoutLMv2MP,
    title = {LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding},
    author = {Yang Xu and Yiheng Xu and Tengchao Lv and Lei Cui and Furu Wei and Guoxin Wang and Yijuan Lu and Dinei Florencio and Cha Zhang and Wanxiang Che and Min Zhang and Lidong Zhou},
    booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL) 2021},
    year = {2021},
    month = {August},
}
```
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12031/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12031",
"html_url": "https://github.com/huggingface/transformers/pull/12031",
"diff_url": "https://github.com/huggingface/transformers/pull/12031.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12031.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12030 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12030/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12030/comments | https://api.github.com/repos/huggingface/transformers/issues/12030/events | https://github.com/huggingface/transformers/issues/12030 | 911,726,385 | MDU6SXNzdWU5MTE3MjYzODU= | 12,030 | xla_spawn.py: Cannot load large (~1GB) optimizer.pt from checkpoint | {
"login": "BassaniRiccardo",
"id": 48254418,
"node_id": "MDQ6VXNlcjQ4MjU0NDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/48254418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BassaniRiccardo",
"html_url": "https://github.com/BassaniRiccardo",
"followers_url": "https://api.github.com/users/BassaniRiccardo/followers",
"following_url": "https://api.github.com/users/BassaniRiccardo/following{/other_user}",
"gists_url": "https://api.github.com/users/BassaniRiccardo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BassaniRiccardo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BassaniRiccardo/subscriptions",
"organizations_url": "https://api.github.com/users/BassaniRiccardo/orgs",
"repos_url": "https://api.github.com/users/BassaniRiccardo/repos",
"events_url": "https://api.github.com/users/BassaniRiccardo/events{/privacy}",
"received_events_url": "https://api.github.com/users/BassaniRiccardo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"That is a weird error. I can't reproduce on my side, using an n2-standard-8 (8 vCPUs, 32 GB memory) with the TPUs.\r\nThere is no alternative to load the optimizer state in each process since each of the TPU cores will need it, and it needs to pass through the CPU sadly because PyTorch XLA does not handle loading it directly on an XLA device.",
"Thank you very much for the quick response! This seems to suggest it cannot be a RAM issue. I don't know how to check what happens exactly, the only error message I get is:\r\n\r\n torch.multiprocessing.spawn.ProcessExitedException: process 6 terminated with signal SIGKILL\r\n\r\nI am trying to load from \"more advanced\" checkpoints (not after 10 steps but after 10k, 30k), but that should not make any difference.\r\n\r\nFor the rest, there are some differences between the original xla_spawn.py code and my script, but the fact that it works with a smaller model puzzles me.\r\n\r\nAny idea about what else could go wrong when loading the optimizer and/or how to get a more specific error?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,626 | 1,626 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.7.0.dev0
- Platform: Linux-4.19.0-14-cloud-amd64-x86_64-with-debian-10.9
- Python version: 3.7.3
- PyTorch version (GPU?): 1.8.1+cu102 (False, using TPU)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no, using TPU
- Using distributed or parallel set-up in script?: yes, v3-8 TPU
### Who can help
@sgugger
## Information
Model I am using (BertForMaskedLM):
The tasks I am working on is:
* [ ] Custom Bert Pretraining (MLM only)
The problem arises when using:
* [ ] my own modified scripts: (scroll at the bottom for full script)
I am using a modified version of xla_spawn.py, written following:
https://wandb.ai/darshandeshpande/marathi-distilbert/reports/Training-Devanagari-Language-Models-on-TPU-using-Hugging-Face-and-PyTorch--Vmlldzo1MDgyMDQ
The goals are:
1. On-the-fly tokenization (working)
2. Avoid memory waste by wrapping the model with a xmp.MpModelWrapper (not really sure about the actual efficiency of this, but at least no errors result from this modification alone)
**Training without resuming from checkpoint works fine.**
**Also loading checkpoint for a small-bert version (optimizer size ~35MB) works fine.**
**When trying to load a checkpoint for bert-base (optimizer size ~1GB) the program crashes at the line**:
```python
optimizer_state = torch.load(os.path.join(checkpoint, "optimizer.pt"), map_location="cpu")
```
of Trainer.py
It might simply be a RAM issue, but if so, the loading could perhaps be memory-optimized.
I am working with an e2-highmem-4 (4 vCPUs, 32 GB memory, 1 TB persistent disk), accelerated by a v3-8 TPU on GCP.
If `torch.load(..., map_location="cpu")` is called 8 times (once per core), that takes around 1.5 GB x 8 = 12 GB, so this should not be a problem unless a significant amount of RAM is already in use or something else is going wrong.
However, in the small-bert case the same code works.
**If memory is actually the case, would it be possible to store the optimizer (and the remaining checkpoint data) only once?**
**(I guess loading directly to TPU is not an option?)**
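For reference, here is a minimal sketch of the kind of mitigation I have in mind (a hypothetical helper, not part of the `Trainer`): serialize the load across the 8 processes so that only one copy of the state dict sits in host RAM at any moment:

```python
import os

import torch
import torch_xla.core.xla_model as xm


def load_optimizer_state_serially(checkpoint, optimizer):
    # Each process loads in turn, so peak host RAM stays at ~1x the file
    # size instead of ~8x when all processes call torch.load at once.
    for rank in range(xm.xrt_world_size()):
        if xm.get_ordinal() == rank:
            state = torch.load(os.path.join(checkpoint, "optimizer.pt"), map_location="cpu")
            optimizer.load_state_dict(state)
            del state  # free the CPU copy before the next process starts
        xm.rendezvous(f"optimizer_load_{rank}")
```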
## To reproduce
Steps to reproduce the behavior:
1. run xla_spawn with run_mlm.py for bert-base pretraining:
```bash
python xla_spawn.py \
--num_cores=8 \
language-modeling/run_mlm.py \
--train_file $TRAIN_FILE \
--model_name_or_path bert-base-uncased \
--output_dir $OUTPUT_DIR \
--overwrite_output_dir False \
--do_train True \
--do_eval False \
--save_steps 10 \
```
2. interrupt the training after at least one checkpoint has been created
3. run xla_spawn with run_mlm.py for bert-base pretraining resuming checkpoint:
```bash
python xla_spawn.py \
--num_cores=8 \
language-modeling/run_mlm.py \
--train_file $TRAIN_FILE \
--model_name_or_path bert-base-uncased \
--output_dir $NEW_OUTPUT_DIR \
--overwrite_output_dir False \
--do_train True \
--do_eval False \
--save_steps 10 \
--resume_from_checkpoint $CHECKPOINT_DIR \
```
## Expected behavior
resume_from_checkpoint works in the bert-base case just as it does in the small-bert case.
## My Script
```python
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl
import torch_xla.distributed.xla_multiprocessing as xmp

import logging
import math
import os
import sys
import json
import pickle
from pathlib import Path
from dataclasses import dataclass, field
from typing import Optional

import datasets
from datasets import load_dataset

import transformers
from transformers import (
    CONFIG_MAPPING,
    MODEL_FOR_MASKED_LM_MAPPING,
    AutoConfig,
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    HfArgumentParser,
    Trainer,
    TrainingArguments,
    set_seed,
)
from transformers.trainer_utils import get_last_checkpoint, is_main_process
from transformers.utils import check_min_version
from transformers import BertConfig, BertTokenizerFast, BertForMaskedLM

# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
check_min_version("4.7.0.dev0")

# Set up logger: writing both to file and to std output
file_handler = logging.FileHandler(filename='tpu_training_logger')
file_handler.setLevel(logging.INFO)
stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.setLevel(logging.INFO)
handlers = [file_handler, stdout_handler]
logging.basicConfig(
    format='%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s',
    datefmt='%H:%M:%S',
    level=logging.INFO,
    handlers=handlers
)
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# avoid creating useless and space-consuming copies of the data for each tpu-core
SERIAL_EXEC = xmp.MpSerialExecutor()


@dataclass
class ModelArguments:
    """
    Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch.
    """

    model_type: Optional[str] = field(
        default="uncased_baseline",
        metadata={"help": "uncased_baseline, cased_baseline, or model"},
    )
    cache_dir: Optional[str] = field(
        default=None,
        metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"},
    )
    use_fast_tokenizer: bool = field(
        default=True,
        metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."},
    )
    model_revision: str = field(
        default="main",
        metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
    )
    use_auth_token: bool = field(
        default=False,
        metadata={
            "help": "Will use the token generated when running `transformers-cli login` (necessary to use this script "
            "with private models)."
        },
    )


@dataclass
class DataTrainingArguments:
    """
    Arguments pertaining to what data we are going to input our model for training and eval.
    """

    train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."})
    validation_file: Optional[str] = field(
        default=None,
        metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."},
    )
    overwrite_cache: bool = field(
        default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
    )
    validation_split_percentage: Optional[int] = field(
        default=5,
        metadata={
            "help": "The percentage of the train set used as validation set in case there's no validation split"
        },
    )
    max_seq_length: Optional[int] = field(
        default=512,
        metadata={
            "help": "The maximum total input sequence length after tokenization. Sequences longer "
            "than this will be truncated."
        },
    )
    preprocessing_num_workers: Optional[int] = field(
        default=None,
        metadata={"help": "The number of processes to use for the preprocessing."},
    )
    mlm_probability: float = field(
        default=0.15, metadata={"help": "Ratio of tokens to mask for masked language modeling loss"}
    )
    line_by_line: bool = field(
        default=True,
        metadata={"help": "Whether distinct lines of text in the dataset are to be handled as distinct sequences."},
    )
    pad_to_max_length: bool = field(
        default=True,
        metadata={
            "help": "Whether to pad all samples to `max_seq_length`. "
            "If False, will pad the samples dynamically when batching to the maximum length in the batch."
        },
    )
    max_train_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": "For debugging purposes or quicker training, truncate the number of training examples to this "
            "value if set."
        },
    )
    max_eval_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": "For debugging purposes or quicker training, truncate the number of evaluation examples to this "
            "value if set."
        },
    )

    def __post_init__(self):
        if self.train_file is None and self.validation_file is None:
            raise ValueError("Need a training/validation file.")
        else:
            if self.train_file is not None:
                extension = self.train_file.split(".")[-1]
                assert extension in ["csv", "json", "txt"], "`train_file` should be a csv, a json or a txt file."
            if self.validation_file is not None:
                extension = self.validation_file.split(".")[-1]
                assert extension in ["csv", "json", "txt"], "`validation_file` should be a csv, a json or a txt file."


def add_custom_args(hf_parser):
    hf_parser.add_argument(
        '--icebert_folder',
        type=str,
        default="/home/riccardobassani17/bucket/transformers/examples/pytorch/language-modeling/icebert",
        help="Path to folder containing icebert utils and files"
    )
    hf_parser.add_argument(
        '--config_file',
        type=str,
        default="/home/riccardobassani17/bucket/transformers/examples/pytorch/language-modeling/icebert/config_files/small_bert.json",
        help="Path of the BertConfig json file, relative to the icebert folder"
    )
    return hf_parser


def get_tokenized_dataset():
    tokenized_datasets = datasets.load_dataset('text', data_files=data_files, cache_dir=cache_dir)

    def tokenize_function(examples):
        # Remove empty lines
        examples["text"] = [line for line in examples["text"] if len(line) > 0 and not line.isspace()]
        return tokenizer(
            examples["text"],
            padding="max_length",
            truncation=True,
            max_length=max_len,
            return_special_tokens_mask=True,
        )

    return tokenized_datasets.with_transform(tokenize_function)


def get_data_collator():
    return DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)


def map_fn(index):
    # See all possible arguments in src/transformers/training_args.py
    # or by passing the --help flag to this script.
    # We now keep distinct sets of args, for a cleaner separation of concerns.
    parser = add_custom_args(HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments)))
    if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
        # If we pass only one argument to the script and it's the path to a json file,
        # let's parse it to get our arguments.
        model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
    else:
        model_args, data_args, training_args, args = parser.parse_args_into_dataclasses()
    logger.info(f"parser built")

    # load and instantiate tokenizer
    global tokenizer
    tokenizer = BertTokenizerFast.from_pretrained(
        Path(args.icebert_folder) / (str(data_args.max_seq_length) + "_tokenizers") / (model_args.model_type + "_tokenizer")
    )

    # load and instantiate configuration file
    with open(args.config_file, 'r') as fp:
        config_dict = json.load(fp)
    config_kwargs = {
        "cache_dir": model_args.cache_dir,
        "revision": model_args.model_revision,
        "use_auth_token": True if model_args.use_auth_token else None,
    }
    config = BertConfig(
        vocab_size=tokenizer.vocab_size,
        max_position_embeddings=data_args.max_seq_length,
        hidden_size=config_dict["hidden_size"],
        num_hidden_layers=config_dict["num_hidden_layers"],
        num_attention_heads=config_dict["num_attention_heads"],
        intermediate_size=config_dict["intermediate_size"],
        hidden_act=config_dict["hidden_act"],
        hidden_dropout_prob=config_dict["hidden_dropout_prob"],
        attention_probs_dropout_prob=config_dict["attention_probs_dropout_prob"],
        type_vocab_size=config_dict["type_vocab_size"],
        initializer_range=config_dict["initializer_range"],
        layer_norm_eps=config_dict["layer_norm_eps"],
        **config_kwargs,
    )

    # load and instantiate model
    # IMPORTANT: the model is wrapped using the xmp.MpModelWrapper, which loads the model only once, in the global scope
    model = xmp.MpModelWrapper(BertForMaskedLM(config))
    logger.info(f"tokenizer and model instantiated")

    # move model to device
    device = xm.xla_device()
    model.to(device)
    xm.rendezvous("Model moved to device")

    # prepare dataset and datacollator for on-the-fly tokenization and masking
    global data_files
    data_files = {"train": data_args.train_file}
    global max_len
    max_len = data_args.max_seq_length
    global cache_dir
    cache_dir = model_args.cache_dir
    tokenized_datasets = SERIAL_EXEC.run(get_tokenized_dataset)
    xm.rendezvous("Tokenized dataset loaded")
    data_collator = SERIAL_EXEC.run(get_data_collator)
    xm.rendezvous("DataCollator loaded")

    # handle possible checkpoints
    last_checkpoint = None
    if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
        last_checkpoint = get_last_checkpoint(training_args.output_dir)
        if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
            raise ValueError(
                f"Output directory ({training_args.output_dir}) already exists and is not empty. "
                "Use --overwrite_output_dir to overcome."
            )
        elif last_checkpoint is not None and training_args.resume_from_checkpoint is None:
            logger.info(
                f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
                "the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
            )

    # select and optionally sample the train_dataset
    train_dataset = None
    if training_args.do_train:
        if "train" not in tokenized_datasets:
            raise ValueError("--do_train requires a train dataset")
        train_dataset = tokenized_datasets["train"]
        if data_args.max_train_samples is not None:
            train_dataset = train_dataset.select(range(data_args.max_train_samples))

    # setup training parameters
    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset,  # the (optionally subsampled) train split selected above
        tokenizer=tokenizer,
        data_collator=data_collator,
    )

    # start training
    if training_args.do_train:
        checkpoint = None
        if training_args.resume_from_checkpoint is not None:
            checkpoint = training_args.resume_from_checkpoint
        elif last_checkpoint is not None:
            checkpoint = last_checkpoint
        logger.info("*** Starting training ***")
        train_result = trainer.train(resume_from_checkpoint=checkpoint)
        trainer.save_model()  # Saves the tokenizer too for easy upload
        logger.info("*** Model saved ***")
        metrics = train_result.metrics
        max_train_samples = (
            data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)
        )
        metrics["train_samples"] = min(max_train_samples, len(train_dataset))
        trainer.log_metrics("train", metrics)
        trainer.save_metrics("train", metrics)
        trainer.save_state()


if __name__ == "__main__":
    xmp.spawn(map_fn, args=(), nprocs=8, start_method='fork')
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12030/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12030/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12029 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12029/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12029/comments | https://api.github.com/repos/huggingface/transformers/issues/12029/events | https://github.com/huggingface/transformers/issues/12029 | 911,701,095 | MDU6SXNzdWU5MTE3MDEwOTU= | 12,029 | Seq2SeqTrainer: cannot set max length when we evaluate/(generate) during training | {
"login": "allanj",
"id": 3351187,
"node_id": "MDQ6VXNlcjMzNTExODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3351187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/allanj",
"html_url": "https://github.com/allanj",
"followers_url": "https://api.github.com/users/allanj/followers",
"following_url": "https://api.github.com/users/allanj/following{/other_user}",
"gists_url": "https://api.github.com/users/allanj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/allanj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/allanj/subscriptions",
"organizations_url": "https://api.github.com/users/allanj/orgs",
"repos_url": "https://api.github.com/users/allanj/repos",
"events_url": "https://api.github.com/users/allanj/events{/privacy}",
"received_events_url": "https://api.github.com/users/allanj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The Seq2SeqTrainer also accepts the `max_length` argument in its [evaluate method](https://github.com/huggingface/transformers/blob/1f335aef3bb5382b5cfd7adbe5861ed4979dd98d/src/transformers/trainer_seq2seq.py#L41).",
"Yeah. But the `Seq2SeqTrainer` extends `Trainer`, which implements the actual`train` function.\r\n\r\nhttps://github.com/huggingface/transformers/blob/cbe63949d7/src/transformers/trainer.py#L924\r\nAnd it is \"fixed\" as in no argument will be passed in.",
"Ah yes, for this you need to set the parameters in the config then."
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.6.1
### Who can help
- trainer: @sgugger
## Information
`Seq2SeqTrainer`: cannot set the max generation length when evaluating/generating during training.
I know we can set the max length during the actual evaluation here: https://github.com/huggingface/transformers/blob/cbe63949d7/examples/seq2seq/finetune_trainer.py#L321
But if we want to set the max length during the evaluation in training:
https://github.com/huggingface/transformers/blob/cbe63949d7/src/transformers/trainer.py#L924
As far as I can see, there is currently nothing I can pass in.
I found that setting `model.config.max_length` works, but is there a more explicit argument I can pass in instead?
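For reference, the workaround currently looks like this (a sketch with arbitrary values; it relies on `generate()` falling back to the model config during in-training evaluation):

```python
# generate() reads its defaults from the model config, so setting these
# before building the trainer also affects evaluation during training.
model.config.max_length = 128
model.config.num_beams = 4

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,  # with predict_with_generate=True
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```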
Also, if I want to get the logits as well during training, is there anything I can do here?
https://github.com/huggingface/transformers/blob/cbe63949d7/src/transformers/trainer.py#L1505
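The only option I currently see for the logits is subclassing the trainer; a rough (untested) sketch:

```python
import torch
from transformers import Seq2SeqTrainer


class LoggingSeq2SeqTrainer(Seq2SeqTrainer):
    def prediction_step(self, model, inputs, prediction_loss_only, ignore_keys=None):
        loss, generated_tokens, labels = super().prediction_step(
            model, inputs, prediction_loss_only, ignore_keys=ignore_keys
        )
        # extra forward pass just to expose the logits for this batch
        with torch.no_grad():
            self.last_logits = model(**self._prepare_inputs(inputs)).logits
        return loss, generated_tokens, labels
```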
## Expected behavior
Be able to set the max generation length, and to obtain the logits, during evaluation in training.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12029/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12028 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12028/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12028/comments | https://api.github.com/repos/huggingface/transformers/issues/12028/events | https://github.com/huggingface/transformers/pull/12028 | 911,700,413 | MDExOlB1bGxSZXF1ZXN0NjYxOTM3NjM1 | 12,028 | New TF GLUE example | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,622 | 1,623 | 1,623 | MEMBER | null | This is the PR for the new-style TF GLUE example. I'd like to run a few more tests, especially on the weirder datasets like MNLI and STSB, before I merge it, but it's almost ready! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12028/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12028",
"html_url": "https://github.com/huggingface/transformers/pull/12028",
"diff_url": "https://github.com/huggingface/transformers/pull/12028.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12028.patch",
"merged_at": 1623330877000
} |
https://api.github.com/repos/huggingface/transformers/issues/12027 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12027/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12027/comments | https://api.github.com/repos/huggingface/transformers/issues/12027/events | https://github.com/huggingface/transformers/pull/12027 | 911,687,195 | MDExOlB1bGxSZXF1ZXN0NjYxOTI2NjM2 | 12,027 | Replace legacy tensor.Tensor with torch.tensor/torch.empty | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,622 | 1,623 | 1,623 | CONTRIBUTOR | null | Motivated by [this issue](https://github.com/pytorch/pytorch/issues/53146) in the PyTorch repo. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12027/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12027",
"html_url": "https://github.com/huggingface/transformers/pull/12027",
"diff_url": "https://github.com/huggingface/transformers/pull/12027.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12027.patch",
"merged_at": 1623157119000
} |
https://api.github.com/repos/huggingface/transformers/issues/12026 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12026/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12026/comments | https://api.github.com/repos/huggingface/transformers/issues/12026/events | https://github.com/huggingface/transformers/pull/12026 | 911,484,385 | MDExOlB1bGxSZXF1ZXN0NjYxNzU0MDI5 | 12,026 | Fixes bug that appears when using QA BERT and distillation. | {
"login": "madlag",
"id": 272253,
"node_id": "MDQ6VXNlcjI3MjI1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/272253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/madlag",
"html_url": "https://github.com/madlag",
"followers_url": "https://api.github.com/users/madlag/followers",
"following_url": "https://api.github.com/users/madlag/following{/other_user}",
"gists_url": "https://api.github.com/users/madlag/gists{/gist_id}",
"starred_url": "https://api.github.com/users/madlag/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/madlag/subscriptions",
"organizations_url": "https://api.github.com/users/madlag/orgs",
"repos_url": "https://api.github.com/users/madlag/repos",
"events_url": "https://api.github.com/users/madlag/events{/privacy}",
"received_events_url": "https://api.github.com/users/madlag/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It looks like I should also modify a bunch of other models (utils/check_copies.py fails) to match the same change I did in BERT, I will wait to hear from you what is the process to do so.\r\n",
"It should be ok now, only non related tests are failing in run_tests_torch.\r\n"
] | 1,622 | 1,623 | 1,623 | CONTRIBUTOR | null | This is a fix for
https://github.com/huggingface/transformers/issues/11626
and is somewhat related to:
https://github.com/huggingface/transformers/issues/11941
During the backward pass, PyTorch complains with:

`RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation`

This happens because the QA model code modifies the `start_positions` and `end_positions` input tensors using the in-place `clamp_` function: as a consequence, the teacher and the student both modify the inputs, and the backward pass fails.
From a quick check, the same pattern appears in at least all of the QA code, for example:
```bash
cd transformers/src/transformers/models
grep -nr '[a-z]_(' . | grep clamp
...
./xlnet/modeling_xlnet.py:1877: start_positions.clamp_(0, ignored_index)
...
```
(and maybe in other model code as well.)
This may be intended, but it is quite hard for the end user to track down these bugs, as the linked issues show.
(And maybe PyTorch changed something that made this apparent.)
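For clarity, the change boils down to replacing the in-place clamp with an out-of-place one (an illustration of the pattern, applied to BERT and the models copied from it):

```python
# before -- mutates the caller's start_positions/end_positions, which the
# teacher and the student share during distillation:
# start_positions.clamp_(0, ignored_index)
# end_positions.clamp_(0, ignored_index)

# after -- returns clamped copies and leaves the input tensors untouched:
start_positions = start_positions.clamp(0, ignored_index)
end_positions = end_positions.clamp(0, ignored_index)
```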
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12026/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12026",
"html_url": "https://github.com/huggingface/transformers/pull/12026",
"diff_url": "https://github.com/huggingface/transformers/pull/12026.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12026.patch",
"merged_at": 1623079320000
} |
https://api.github.com/repos/huggingface/transformers/issues/12025 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12025/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12025/comments | https://api.github.com/repos/huggingface/transformers/issues/12025/events | https://github.com/huggingface/transformers/pull/12025 | 911,217,654 | MDExOlB1bGxSZXF1ZXN0NjYxNTI2NTMx | 12,025 | Extend pipelines for automodel tupels | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"FYI, there are 536 model_ids (out of all public ones) that have no `config.architectures`. \r\nThey can be accessed here: https://github.com/patrickvonplaten/files_to_link_to/blob/master/model_ids_no_config.txt",
"@sgugger I think it should be good to go, but the failing test is not from this PR can you confirm ?",
"Yes the failing test is unrelated, a fix is on its way. This is safe to merge."
] | 1,622 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
This PR allows for multiple `AutoModel...` classes to be attached to a single pipeline. It should unblock this PR: https://github.com/huggingface/transformers/pull/11525
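Conceptually, a task's `"pt"`/`"tf"` entry can now be a tuple of auto-model classes instead of a single class, and the factory tries each candidate in order. A simplified sketch of the idea (not the actual implementation):

```python
def load_first_compatible(model_id, model_classes, **kwargs):
    # try each candidate auto class in order until one accepts the checkpoint
    for model_class in model_classes:
        try:
            return model_class.from_pretrained(model_id, **kwargs)
        except (OSError, ValueError):
            continue
    raise ValueError(f"Could not load {model_id} with any of {model_classes}")
```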
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik for another set of eyes if possible.
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12025/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12025",
"html_url": "https://github.com/huggingface/transformers/pull/12025",
"diff_url": "https://github.com/huggingface/transformers/pull/12025.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12025.patch",
"merged_at": 1623080487000
} |
https://api.github.com/repos/huggingface/transformers/issues/12024 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12024/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12024/comments | https://api.github.com/repos/huggingface/transformers/issues/12024/events | https://github.com/huggingface/transformers/pull/12024 | 911,216,752 | MDExOlB1bGxSZXF1ZXN0NjYxNTI1Nzc2 | 12,024 | Add CANINE | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @NielsRogge , thanks for that PR! I'm currently trying to run the token classification example and here's some initial feedback:\r\n\r\n- AutoTokenizer is currently not working/finding the `CanineTokenizer`, so I had to manually specify it\r\n- It seems that the token classification example only works with Fast Tokenizers: I disabled that assertion check, but then the following message is thrown:\r\n\r\n``` File \"run_ner.py\", line 514, in <module>\r\n main()\r\n File \"run_ner.py\", line 361, in main\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n File \"/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1407, in map\r\n update_data = does_function_return_dict(test_inputs, test_indices)\r\n File \"/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1378, in does_function_return_dict\r\n function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n File \"run_ner.py\", line 330, in tokenize_and_align_labels\r\n word_ids = tokenized_inputs.word_ids(batch_index=i)\r\n File \"/mnt/transformers-canince/src/transformers/tokenization_utils_base.py\", line 347, in word_ids\r\n raise ValueError(\"word_ids() is not available when using Python-based tokenizers\")\r\nValueError: word_ids() is not available when using Python-based tokenizers\r\n```\r\n\r\nis there any chance to get this example running in the final version of PR :thinking: would be highly interesting to see the results :hugs: ",
"@patrickvonplaten @sgugger @LysandreJik my PR is ready for review. \r\n\r\nJust a question: I might define a custom `CanineModelOutput`, as CANINE consists of 3 Transformer encoders (2 shallow ones, which only consists of a single layer, and one \"deep\" BERT-like). If a user specifies `output_hidden_states=True` for example, then it could return the hidden states of all of these 3 encoders (however, the hidden states won't have the same shape then). Currently, I'm just using a `BaseModelOutputWithPooling`, and it only returns the hidden states of the deep encoder (which all have the same shape). Would appreciate your feedback here. \r\n\r\nAlso @LysandreJik: would be great if you could review the tokenizer. Perhaps it would be better if a space is added after the CLS token and before the SEP token when decoding, e.g. \"hello world\" is currently decoded as \"[CLS]hello world[SEP]\". Do I need to update the `lstrip` and `rstrip` parameters of the `AddedToken` instances for that?\r\n\r\n\r\n",
"@NielsRogge just one question (I tried to use `CanineModel` with the latest commit):\r\n\r\n```python\r\nIn [36]: output[-1].shape\r\nOut[36]: torch.Size([1, 3, 768])\r\n\r\nIn [37]: encoding = tokenizer([\"hello world and hugging face\"], padding=\"longest\", return_tensors=\"pt\")\r\n\r\nIn [38]: hidden_states = model(**encoding).hidden_states\r\n\r\nIn [39]: hidden_states[-1].shape\r\nOut[39]: torch.Size([1, 7, 768])\r\n\r\nIn [40]: encoding = tokenizer([\"huggingface\"], padding=\"longest\", return_tensors=\"pt\")\r\n\r\nIn [41]: hidden_states = model(**encoding).hidden_states\r\n\r\nIn [42]: hidden_states[-1].shape\r\nOut[42]: torch.Size([1, 3, 768])\r\n```\r\n\r\nI don't really understand the shape of the hidden states from the model. I would expect one tensor per character, but why is it sometimes 3 or 7? ",
"Hi @stefan-it,\r\n\r\nThe reason is that CANINE downsamples the character sequence length before applying the deep Transformer encoder. The downsampling rate is by default set to 4, and the max sequence length (in terms of characters) is set to 2048. So `2048 // 4 = 512`, which is the regular length of models like BERT and RoBERTa. \r\n\r\nIn your case, you're not padding anything, so if you just provide `\"HuggingFace\"`, then the character sequence length is 13 (special tokens included), and `13 // 4 = 3`, hence the hidden states of the deep encoder have length 3. If you just want the final hidden states for each character (which are upsampled by another shallow Transformer encoder), you can use `outputs.last_hidden_state`. \r\n\r\nBut, and that's what my question is about above, I could, instead of only returning the `hidden states` of the deep encoder, also return the hidden states of the initial and final Transformer encoders. In that case, you will have `hidden_states` of the initial encoder at the character level, then `hidden_states` of the deep encoder at the downsampled level (the authors call this \"molecule level\"), and the `hidden_states` of the final encoder at the character level. ",
"Update: I'm working on a separate branch called `updating_outputs_canine` which replaces the `BaseModelOutputWithPooling` by a custom `CanineModelOutputWithPooling`, and it returns the attentions and hidden states of all 3 Transformer encoders. All tests passing :) I'll merge it with this main branch once I've got approval",
"Thanks a lot for your work @NielsRogge, a fantastic addition to the library once again!",
"Thanks! I haven't uploaded the model checkpoints yet, will do soon",
"Hi @NielsRogge , thanks for adding this! I'm current playing around with the model and found this corner case:\r\n\r\n```python\r\nfrom transformers import AutoConfig, CanineModel, CanineTokenizer\r\nmodel_name = \"google/canine-s\"\r\n\r\nconfig = AutoConfig.from_pretrained(model_name, output_hidden_states=True)\r\nmodel = CanineModel.from_pretrained(pretrained_model_name_or_path=model_name, config=config)\r\ntokenizer = CanineTokenizer.from_pretrained(model_name)\r\n\r\nencoding = tokenizer([\".\"], padding=\"longest\", return_tensors=\"pt\")\r\nhidden_states = model(**encoding).last_hidden_state\r\n```\r\n\r\nThis happens, when input has a length of 1:\r\n\r\n```bash\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last) \r\n<ipython-input-1-3719c2ae7d25> in <module> \r\n 7 \r\n 8 encoding = tokenizer([\".\"], padding=\"longest\", return_tensors=\"pt\")\r\n----> 9 hidden_states = model(**encoding).last_hidden_state \r\n \r\n/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 1013 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks \r\n 1014 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1015 return forward_call(*input, **kwargs)\r\n 1016 # Do not call functions when jit is used\r\n 1017 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\n/mnt/europeana-bert/transformers/src/transformers/models/canine/modeling_canine.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, outpu\r\nt_attentions, output_hidden_states, return_dict)\r\n 1185 # this, it seems that molecules and characters require a very different\r\n 1186 # feature space; intuitively, this makes sense.\r\n-> 1187 init_molecule_encoding = self.chars_to_molecules(input_char_encoding)\r\n 1188\r\n 1189 # Deep BERT encoder\r\n\r\n/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 1013 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1014 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1015 return forward_call(*input, **kwargs)\r\n 1016 # Do not call functions when jit is used\r\n 1017 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\n/mnt/europeana-bert/transformers/src/transformers/models/canine/modeling_canine.py in forward(self, char_encoding)\r\n 324 # We transpose it to be [batch, hidden_size, char_seq]\r\n 325 char_encoding = torch.transpose(char_encoding, 1, 2)\r\n--> 326 downsampled = self.conv(char_encoding)\r\n 327 downsampled = torch.transpose(downsampled, 1, 2)\r\n 328 downsampled = self.activation(downsampled)\r\n\r\n/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 1013 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1014 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1015 return forward_call(*input, **kwargs)\r\n 1016 # Do not call functions when jit is used\r\n 1017 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\n/opt/conda/lib/python3.8/site-packages/torch/nn/modules/conv.py in forward(self, input)\r\n 261\r\n 262 def forward(self, input: Tensor) -> Tensor:\r\n--> 263 return self._conv_forward(input, self.weight, self.bias)\r\n 264\r\n 
265\r\n\r\n/opt/conda/lib/python3.8/site-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)\r\n 257 weight, bias, self.stride,\r\n 258 _single(0), self.dilation, self.groups)\r\n--> 259 return F.conv1d(input, weight, bias, self.stride,\r\n 260 self.padding, self.dilation, self.groups)\r\n 261\r\n\r\nRuntimeError: Calculated padded input size per channel: (3). Kernel size: (4). Kernel size can't be greater than actual input size\r\n```\r\n\r\n(I have one NER dataset, and one sentence consists of one token, and that token is the `.` 😅)\r\n\r\nThanks for your patience :hugs: ",
"Yeah so the input has length 3 (3 unicode code points, namely for \"[CLS]\", \".\" and \"[SEP]\". However, CANINE uses a convolutional operation to downsample the sequence length, and the kernel size is 4. So of course you can't apply a kernel of size 4 to an input of size 3. So it's advised to pad the input up a size that's at least 4. "
] | 1,622 | 1,625 | 1,625 | CONTRIBUTOR | null | # What does this PR do?
It adds Google's new [CANINE](https://arxiv.org/abs/2103.06874) model. It's a tokenizer-free model, meaning you can throw away your `vocab.txt` file. The model trains at a character level, namely by turning each character into its unicode code point. In Python, this can be done using the built-in `ord()` function. This means that `input_ids` can be created simply as `[ord(char) for char in text]`. This is different from ByT5.
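A quick illustration of the encoding described above (a minimal sketch, not the model's full preprocessing):

```
text = "hello world"
input_ids = [ord(char) for char in text]  # each character becomes its unicode code point
print(input_ids)  # [104, 101, 108, 108, 111, 32, 119, 111, 114, 108, 100]
text_back = "".join(chr(i) for i in input_ids)  # round-trips back to "hello world"
```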
However, there's still a good use for a `CanineTokenizer`, namely for padding/truncating unicode code points up to the max length of 2048. It's also handy as it lets you easily convert text (a string) to ids (unicode code points) and vice versa.
Due to the bigger sequence length (2048), the model downsamples the characters to what is called "molecules", of length 512. Then, a regular BERT encoder is applied. The `pooled_output` (which can be used for sequence classification) is then simply equal to the last hidden state of the first [CLS] token followed by a linear layer. In order to get a `sequence_output` (useful for token classification tasks) which is again of length 2048, an upsampling technique is used (details can be found in the paper).
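Spelling out the downsampling arithmetic from the paragraph above:

```
max_char_positions = 2048  # CANINE's max character sequence length
downsampling_rate = 4
molecule_positions = max_char_positions // downsampling_rate  # 512, the regular BERT/RoBERTa length
```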
To do:
- [x] remove `is_decoder` logic. Is it possible to add support for `is_decoder` in a future PR? Will this be backwards compatible? Answer: yes
- [x] fix tests (currently 35 pass, 1 fail). Once the official checkpoints are on the hub under the "Google" namespace, all tests pass.
- [ ] Update namespace on the hub from nielsr to google (update the tests for this too), and add model cards.
A question here: the CANINE model uses 3 Transformer encoders (2 shallow ones, consisting of only a single layer, of which the first one uses local attention, and a deep one similar to BERT). Should it be possible to return the `hidden_states` and `attentions` of all of these 3 encoders in the output? Or only of the deep one? Otherwise I need to define a custom `CanineModelOutput`.
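For context, a minimal sketch of what such a custom output class could look like. The name `CanineModelOutputWithPooling` is the one mentioned in the discussion on this PR; the exact fields here are assumed for illustration:

```
from dataclasses import dataclass
from typing import Optional, Tuple

import torch
from transformers.file_utils import ModelOutput


@dataclass
class CanineModelOutputWithPooling(ModelOutput):
    # final character-level states plus optional per-encoder hidden states/attentions
    last_hidden_state: torch.FloatTensor = None
    pooler_output: torch.FloatTensor = None
    hidden_states: Optional[Tuple[torch.FloatTensor]] = None
    attentions: Optional[Tuple[torch.FloatTensor]] = None
```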
Fixes #11016
cc @patil-suraj @patrickvonplaten
Also tagging one of the original authors: @dhgarrette | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12024/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12024/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12024",
"html_url": "https://github.com/huggingface/transformers/pull/12024",
"diff_url": "https://github.com/huggingface/transformers/pull/12024.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12024.patch",
"merged_at": 1625054744000
} |
https://api.github.com/repos/huggingface/transformers/issues/12023 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12023/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12023/comments | https://api.github.com/repos/huggingface/transformers/issues/12023/events | https://github.com/huggingface/transformers/pull/12023 | 911,189,150 | MDExOlB1bGxSZXF1ZXN0NjYxNTAxOTI2 | 12,023 | Flax CLM script | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
},
{
"id": 2934977194,
"node_id": "MDU6TGFiZWwyOTM0OTc3MTk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Flax",
"name": "Flax",
"color": "4862AD",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,622 | 1,623 | 1,623 | MEMBER | null | This PR adds a causal language model training script for Flax.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12023/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12023",
"html_url": "https://github.com/huggingface/transformers/pull/12023",
"diff_url": "https://github.com/huggingface/transformers/pull/12023.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12023.patch",
"merged_at": 1623404780000
} |
https://api.github.com/repos/huggingface/transformers/issues/12022 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12022/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12022/comments | https://api.github.com/repos/huggingface/transformers/issues/12022/events | https://github.com/huggingface/transformers/issues/12022 | 911,140,738 | MDU6SXNzdWU5MTExNDA3Mzg= | 12,022 | Getting IndexError: index out of range in self while finetuning GPTNeo on Text Classification | {
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is because you are using a pretrained model but add new tokens without changing the weights. There is thus a problem when you pass that new token as input ID, it doesn't have a matching embedding. You can resize the embedding matrix but you will then lose all the weights of that embedding matrix, so you won't be properly applying transfer learning.",
"Thanks, @sgugger,\r\nIf I train the model with 1 batch size and no new padding token it works as you said that the exact reason.\r\nSo to add any new token in the tokenizer I need to train the model again for language modeling?",
"Not necessarily for language modeling, but you will need more training since all the embeddings will be randomly initialized. The only way I see around it is to manually create the embeddings weights to use the pretrained weights for all the tokens except the padding token and then put a line of 0s for the padding token. That should work. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,626 | 1,626 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: current master branch
- Platform: Colab
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101
- Tensorflow version (GPU?): Not using
- Using GPU in script?: the issue also occurs with GPU
- Using distributed or parallel set-up in script?: yes, I am using the Trainer
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below) mentioned in the colab below
The tasks I am working on is:
* [x] an official GLUE/SQuAD task: emotion
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Run this [colab](https://github.com/bhadreshpsavani/ExploringSentimentalAnalysis/blob/main/SentimentalAnalysisWithGPTNeo.ipynb)
## Error:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-13-5845bc74dd04> in <module>()
5 train_dataset=emotions_encoded["train"],
6 eval_dataset=emotions_encoded["validation"])
----> 7 trainer.train();
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs)
1261 tr_loss += self.training_step(model, inputs)
1262 else:
-> 1263 tr_loss += self.training_step(model, inputs)
1264 self.current_flos += float(self.floating_point_ops(inputs))
1265
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in training_step(self, model, inputs)
1739 loss = self.compute_loss(model, inputs)
1740 else:
-> 1741 loss = self.compute_loss(model, inputs)
1742
1743 if self.args.n_gpu > 1:
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)
1771 else:
1772 labels = None
-> 1773 outputs = model(**inputs)
1774 # Save past state if it exists
1775 # TODO: this needs to be fixed and made cleaner later.
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/usr/local/lib/python3.7/dist-packages/transformers/models/gpt_neo/modeling_gpt_neo.py in forward(self, input_ids, past_key_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1098 output_attentions=output_attentions,
1099 output_hidden_states=output_hidden_states,
-> 1100 return_dict=return_dict,
1101 )
1102 hidden_states = transformer_outputs[0]
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/usr/local/lib/python3.7/dist-packages/transformers/models/gpt_neo/modeling_gpt_neo.py in forward(self, input_ids, past_key_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
811
812 if inputs_embeds is None:
--> 813 inputs_embeds = self.wte(input_ids)
814 position_embeds = self.wpe(position_ids)
815 hidden_states = inputs_embeds + position_embeds
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py in forward(self, input)
156 return F.embedding(
157 input, self.weight, self.padding_idx, self.max_norm,
--> 158 self.norm_type, self.scale_grad_by_freq, self.sparse)
159
160 def extra_repr(self) -> str:
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1914 # remove once script supports set_grad_enabled
1915 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1916 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1917
1918
IndexError: index out of range in self
```
Note: this error occurs when running on CPU. The issue also occurs on GPU, but with a different error.
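For reference, a minimal sketch of the workaround discussed in the comments on this issue: after adding the new pad token, resize the embedding matrix and zero out only the new row so all pretrained rows are kept. `model` and `tokenizer` are assumed to be already loaded, and the pad token string is an arbitrary choice:

```
import torch

tokenizer.add_special_tokens({"pad_token": "[PAD]"})
model.resize_token_embeddings(len(tokenizer))  # grow the embedding matrix by one row
with torch.no_grad():
    # keep all pretrained rows; give the new padding token a row of zeros
    model.get_input_embeddings().weight[tokenizer.pad_token_id].zero_()
```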
### Who can help
@patil-suraj @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12022/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12021 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12021/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12021/comments | https://api.github.com/repos/huggingface/transformers/issues/12021/events | https://github.com/huggingface/transformers/pull/12021 | 911,013,542 | MDExOlB1bGxSZXF1ZXN0NjYxMzQ5MTI4 | 12,021 | [Deepspeed] Assert on mismatches between ds and hf args | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@stas00 - this looks great. Thanks for the added bonus of showing the conflicting names, values, and sources!",
"I'm glad to hear you found it useful, @rfernand2. \r\n\r\nThis is all new and experimental so if you find other suggestions for improvements please don't hesitate to file an issue. Thank you!"
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | This is another iteration to make things less prone to errors when dealing with the complex space of partially overlapping DS and HF Trainer configs.
- validate params and assert on mismatch - revamps how the config massage is done
- add a test
- add docs
- this new code uncovered a config mismatch in the deepspeed tests thanks to the new validation, so fixed that too.
As an example of a really bad mismatch (as in the new test), the user gets an assert with:
```
Please correct the following DeepSpeed config values that mismatch TrainingArguments values:
- ds train_micro_batch_size_per_gpu=4 vs hf per_device_train_batch_size=2
- ds gradient_accumulation_steps=4 vs hf gradient_accumulation_steps=2
- ds train_batch_size=1000 vs hf train_batch_size (calculated)=4
- ds gradient_clipping=1.1 vs hf max_grad_norm=1.0
- ds optimizer.params.betas=[0.8, 0.89] vs hf adam_beta1+adam_beta2=[0.9, 0.99]
- ds fp16.enabled=False vs hf fp16+fp16_backend(amp)=True
The easiest method is to set these DeepSpeed config values to 'auto'.
```
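A minimal sketch of how such a combined check can be structured (illustrative only, not the actual implementation): collect every mismatch first, then assert once with the full list.

```
mismatches = []

def check(ds_key, ds_val, hf_key, hf_val):
    if ds_val != hf_val:
        mismatches.append(f"- ds {ds_key}={ds_val} vs hf {hf_key}={hf_val}")

check("train_micro_batch_size_per_gpu", 4, "per_device_train_batch_size", 2)
check("gradient_clipping", 1.1, "max_grad_norm", 1.0)

if mismatches:
    raise ValueError(
        "Please correct the following DeepSpeed config values that mismatch "
        "TrainingArguments values:\n" + "\n".join(mismatches)
    )
```

And the recommended fix on the user side is to replace the conflicting values in the DeepSpeed config with `"auto"`, e.g.:

```
{
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto"
}
```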
Fixes: https://github.com/microsoft/DeepSpeed/issues/1107
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12021/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12021",
"html_url": "https://github.com/huggingface/transformers/pull/12021",
"diff_url": "https://github.com/huggingface/transformers/pull/12021.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12021.patch",
"merged_at": 1622822304000
} |
https://api.github.com/repos/huggingface/transformers/issues/12020 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12020/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12020/comments | https://api.github.com/repos/huggingface/transformers/issues/12020/events | https://github.com/huggingface/transformers/issues/12020 | 910,991,849 | MDU6SXNzdWU5MTA5OTE4NDk= | 12,020 | Remove "Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation." | {
"login": "yananchen1989",
"id": 26405281,
"node_id": "MDQ6VXNlcjI2NDA1Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yananchen1989",
"html_url": "https://github.com/yananchen1989",
"followers_url": "https://api.github.com/users/yananchen1989/followers",
"following_url": "https://api.github.com/users/yananchen1989/following{/other_user}",
"gists_url": "https://api.github.com/users/yananchen1989/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yananchen1989/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yananchen1989/subscriptions",
"organizations_url": "https://api.github.com/users/yananchen1989/orgs",
"repos_url": "https://api.github.com/users/yananchen1989/repos",
"events_url": "https://api.github.com/users/yananchen1989/events{/privacy}",
"received_events_url": "https://api.github.com/users/yananchen1989/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"+1, it seems like this method gets triggered on each call to open-ended generation, which pollutes logs quite a bit :/ Is there a way to have it run just once on `pipeline` creation?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"You can manually set the pad_token_id to prevent this error message before running the pipeline, e.g.\r\n\r\n```python\r\ngen_pipe = pipeline(\"text-generation\")\r\ngen_pipe.model.config.pad_token_id = gen_pipe.model.config.eos_token_id\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@patrickvonplaten Hello, I have set\r\n\r\nmodel.config.pad_token_id = tokenizer.eos_token_id\r\n\r\nbut still got this warning, is it an error? How to disable it totally、"
] | 1,622 | 1,693 | 1,631 | NONE | null | When using GPT2 for generation, there is always info:
> Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation
I found that this message comes from https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py
It seems redundant. Can it be removed for simplicity?
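In the meantime, the message can be suppressed by setting the id explicitly, since it is only emitted when `pad_token_id` is unset. A minimal sketch (`model`, `tokenizer` and `input_ids` assumed to be set up as usual):

```
output_ids = model.generate(input_ids, pad_token_id=tokenizer.eos_token_id)
```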
@patrickvonplaten @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12020/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12020/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12019 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12019/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12019/comments | https://api.github.com/repos/huggingface/transformers/issues/12019/events | https://github.com/huggingface/transformers/issues/12019 | 910,979,172 | MDU6SXNzdWU5MTA5NzkxNzI= | 12,019 | Allow registerable Components | {
"login": "david-waterworth",
"id": 5028974,
"node_id": "MDQ6VXNlcjUwMjg5NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/david-waterworth",
"html_url": "https://github.com/david-waterworth",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user}",
"gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions",
"organizations_url": "https://api.github.com/users/david-waterworth/orgs",
"repos_url": "https://api.github.com/users/david-waterworth/repos",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"received_events_url": "https://api.github.com/users/david-waterworth/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Similar question to https://github.com/huggingface/transformers/issues/10256, and it is indeed a nice proposal!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,626 | 1,626 | NONE | null | # 🚀 Feature request
Currently, the Auto* classes use hardcoded mappings from model_name to class. This means they cannot be used to extend transformers with custom implementations, which in turn makes it difficult to run experiments (for example with alternate tokenisation), particularly when integrating with other frameworks such as AllenNLP.
I have a custom tokeniser that is based on the tokenizers library. Since it uses a custom python pre-tokenisation step it cannot be serialised to the tokenizer.json file. To get around this I created a custom class that builds the pipeline on the fly. i.e.
```
class CustomUnigramTokenizer(PreTrainedTokenizerFast,FromParams):
def __init__(
self,
tokenizer_file:str=None,
bos_token:str="<s>",
eos_token:str="</s>",
sep_token:str="</s>",
cls_token:str="<s>",
unk_token:str="<unk>",
pad_token:str="<pad>",
mask_token:str="<mask>",
**kwargs
):
super().__init__(
tokenizer_file=tokenizer_file,
bos_token=bos_token,
eos_token=eos_token,
unk_token=unk_token,
sep_token=sep_token,
cls_token=cls_token,
pad_token=pad_token,
mask_token=mask_token,
**kwargs,
)
self.backend_tokenizer.pre_tokenizer = self._pre_tokenizer()
```
The issue is that I have to use `CustomUnigramTokenizer.from_pretrained()` rather than `AutoTokenizer.from_pretrained()`.
Most frameworks, e.g. the trainers in the transformers library or the allennlp library, use `AutoTokenizer.from_pretrained()`.
So it would be nice if I could register my implementation, i.e. `AutoTokenizer.register("my_custom_model", CustomUnigramTokenizer)`, allowing `AutoTokenizer.from_pretrained("/path_to_my_custom_tokeniser")`.
I'm assuming I have a config.json in /path_to_my_custom_tokeniser with model_name="my_custom_model".
Even better would be an allennlp registration system, i.e.
```
@PreTrainedTokenizer.register("my_custom_model")
class CustomUnigramTokenizer(PreTrainedTokenizerFast, FromParams):
    def __init__(self):
        ...
```
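To make the request concrete, here is a hedged sketch of how such a registry could resolve classes at load time. All names are assumed; no such API exists in the library at the time of this issue:

```
import json
import pathlib

_TOKENIZER_REGISTRY = {}

def register(model_name):
    def wrap(cls):
        _TOKENIZER_REGISTRY[model_name] = cls
        return cls
    return wrap

def auto_from_pretrained(path):
    # look up the registered class via the model_name stored in config.json
    config = json.loads((pathlib.Path(path) / "config.json").read_text())
    return _TOKENIZER_REGISTRY[config["model_name"]].from_pretrained(path)
```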
## Motivation
It's quite difficult to extend transformers and experiment with different configurations. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12019/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/12019/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12018 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12018/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12018/comments | https://api.github.com/repos/huggingface/transformers/issues/12018/events | https://github.com/huggingface/transformers/pull/12018 | 910,952,052 | MDExOlB1bGxSZXF1ZXN0NjYxMjk2ODgy | 12,018 | [TrainerArguments] format and sort __repr__, add __str__ | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | This PR:
- makes the `TrainerArguments` log dump more readable by sorting and formatting the output
- `__repr__` wasn't actually used in examples, but `__str__` was needed, so this fixes that buglet
New dump looks like:
```
06/03/2021 17:05:12 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(
_n_gpu=1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-06,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_find_unused_parameters=None,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=True,
do_predict=False,
do_train=False,
eval_accumulation_steps=None,
eval_steps=2500,
evaluation_strategy=IntervalStrategy.STEPS,
fp16=False,
fp16_backend=auto,
fp16_full_eval=True,
fp16_opt_level=O1,
gradient_accumulation_steps=1,
greater_is_better=None,
group_by_length=False,
ignore_data_skip=False,
label_names=None,
label_smoothing_factor=0.1,
learning_rate=3e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=-1,
log_on_each_node=True,
logging_dir=runs/Jun03_17-05-12_hope,
logging_first_step=True,
logging_steps=500,
logging_strategy=IntervalStrategy.STEPS,
lr_scheduler_type=SchedulerType.LINEAR,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
no_cuda=False,
num_train_epochs=1.0,
output_dir=output_dir,
overwrite_output_dir=True,
past_index=-1,
per_device_eval_batch_size=16,
per_device_train_batch_size=8,
predict_with_generate=True,
prediction_loss_only=False,
push_to_hub=False,
remove_unused_columns=True,
report_to=['tensorboard'],
resume_from_checkpoint=None,
run_name=output_dir,
save_steps=500,
save_strategy=IntervalStrategy.STEPS,
save_total_limit=None,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
sortish_sampler=True,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_legacy_prediction_loop=False,
warmup_ratio=0.0,
warmup_steps=50,
weight_decay=0.0,
)
```
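For illustration, a minimal sketch of how a sorted, one-field-per-line dump like the one above can be produced for a dataclass. This is a toy with assumed field names, not the actual implementation:

```
import dataclasses

@dataclasses.dataclass
class ToyArguments:
    seed: int = 42
    do_eval: bool = True

    def __repr__(self):
        body = ",\n".join(f"{k}={v}" for k, v in sorted(dataclasses.asdict(self).items()))
        return f"{type(self).__name__}(\n{body},\n)"

    __str__ = __repr__  # logging goes through str(), so alias it to the same output
```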
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12018/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12018",
"html_url": "https://github.com/huggingface/transformers/pull/12018",
"diff_url": "https://github.com/huggingface/transformers/pull/12018.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12018.patch",
"merged_at": 1622824779000
} |
https://api.github.com/repos/huggingface/transformers/issues/12017 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12017/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12017/comments | https://api.github.com/repos/huggingface/transformers/issues/12017/events | https://github.com/huggingface/transformers/pull/12017 | 910,784,207 | MDExOlB1bGxSZXF1ZXN0NjYxMTU1MjEx | 12,017 | Fix deberta 2 Tokenizer Integration Test | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,622 | 1,623 | 1,623 | CONTRIBUTOR | null | fixes #12016 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12017/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12017",
"html_url": "https://github.com/huggingface/transformers/pull/12017",
"diff_url": "https://github.com/huggingface/transformers/pull/12017.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12017.patch",
"merged_at": 1623056155000
} |
https://api.github.com/repos/huggingface/transformers/issues/12016 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12016/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12016/comments | https://api.github.com/repos/huggingface/transformers/issues/12016/events | https://github.com/huggingface/transformers/issues/12016 | 910,775,693 | MDU6SXNzdWU5MTA3NzU2OTM= | 12,016 | FAILED tests/test_tokenization_deberta_v2.py::DebertaV2TokenizationTest::test_tokenizer_integration | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> I will provide a PR\r\n\r\nsee #12017 "
] | 1,622 | 1,623 | 1,623 | CONTRIBUTOR | null | This integration (`@slow`) test fails:
see https://github.com/huggingface/transformers/runs/2723622794?check_suite_focus=true#step:7:2663
I will provide a PR | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12016/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12015 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12015/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12015/comments | https://api.github.com/repos/huggingface/transformers/issues/12015/events | https://github.com/huggingface/transformers/pull/12015 | 910,687,800 | MDExOlB1bGxSZXF1ZXN0NjYxMDczNTMz | 12,015 | Fix problem_type to match with the applied loss function for distillbert sequence classification | {
"login": "sidhantls",
"id": 19412334,
"node_id": "MDQ6VXNlcjE5NDEyMzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/19412334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sidhantls",
"html_url": "https://github.com/sidhantls",
"followers_url": "https://api.github.com/users/sidhantls/followers",
"following_url": "https://api.github.com/users/sidhantls/following{/other_user}",
"gists_url": "https://api.github.com/users/sidhantls/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sidhantls/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sidhantls/subscriptions",
"organizations_url": "https://api.github.com/users/sidhantls/orgs",
"repos_url": "https://api.github.com/users/sidhantls/repos",
"events_url": "https://api.github.com/users/sidhantls/events{/privacy}",
"received_events_url": "https://api.github.com/users/sidhantls/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @abhi1thakur @sgugger ",
"No, this is incorrect. What `single_label_classification` means each sample can only have one label (but there could be multiple classes) so the loss to use is cross entropy. `multi_label_classification` means each sample can have zero or several labels, so in this case we use bce (because there can't be a softmax)."
] | 1,622 | 1,623 | 1,623 | NONE | null | # What does this PR do?
The problem_type in the config does not match the loss function applied; this can be seen [here](https://github.com/huggingface/transformers/blob/242ec31aa59b358e631d981b545fd08330584ea8/src/transformers/models/distilbert/modeling_distilbert.py#L649).
This PR fixes this so that the applied loss is consistent with the problem type: `BCEWithLogitsLoss` is applied for the `problem_type` of `single_label_classification`, and `CrossEntropyLoss` is applied for the problem type `multi_label_classification`
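For context, a minimal sketch of the dispatch being discussed. Note that the maintainer review on this PR states the opposite mapping, i.e. cross entropy for `single_label_classification` and BCE for `multi_label_classification`. Variables `config`, `logits`, `labels` and `num_labels` are assumed to be in scope:

```
from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss

if config.problem_type == "single_label_classification":
    # exactly one class per sample -> softmax cross entropy over the classes
    loss = CrossEntropyLoss()(logits.view(-1, num_labels), labels.view(-1))
elif config.problem_type == "multi_label_classification":
    # zero or more labels per sample -> per-label sigmoid + BCE
    loss = BCEWithLogitsLoss()(logits, labels.float())
```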
Fixes #12014
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12015/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12015",
"html_url": "https://github.com/huggingface/transformers/pull/12015",
"diff_url": "https://github.com/huggingface/transformers/pull/12015.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12015.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12014 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12014/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12014/comments | https://api.github.com/repos/huggingface/transformers/issues/12014/events | https://github.com/huggingface/transformers/issues/12014 | 910,682,834 | MDU6SXNzdWU5MTA2ODI4MzQ= | 12,014 | Mismatch between problem_type and loss functions in DistilBert for sequence classification | {
"login": "sidhantls",
"id": 19412334,
"node_id": "MDQ6VXNlcjE5NDEyMzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/19412334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sidhantls",
"html_url": "https://github.com/sidhantls",
"followers_url": "https://api.github.com/users/sidhantls/followers",
"following_url": "https://api.github.com/users/sidhantls/following{/other_user}",
"gists_url": "https://api.github.com/users/sidhantls/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sidhantls/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sidhantls/subscriptions",
"organizations_url": "https://api.github.com/users/sidhantls/orgs",
"repos_url": "https://api.github.com/users/sidhantls/repos",
"events_url": "https://api.github.com/users/sidhantls/events{/privacy}",
"received_events_url": "https://api.github.com/users/sidhantls/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @abhi1thakur "
] | 1,622 | 1,623 | 1,623 | NONE | null | ## Environment info
- Platform: Windows
- Python version: 3.8
- PyTorch version (GPU?): 1.8
### Who can help
@LysandreJik
Models:
Distillbert for sequence classification
## Information
The problem_type in the config does not match the loss function applied; this can be seen [here](https://github.com/huggingface/transformers/blob/242ec31aa59b358e631d981b545fd08330584ea8/src/transformers/models/distilbert/modeling_distilbert.py#L649)
E.g. for the problem type "single_label_classification", CrossEntropyLoss is used, while the logic that detects "single_label_classification" indicates there should be multiple classes. So it's just that the labels of problem_type are switched. Hence, if someone actually uses this problem_type config to choose their task (binary or multiclass), the applied loss would be incorrect
## Expected behavior
For the problem_type "single_label_classification", the BCE loss should be applied
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12014/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12013 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12013/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12013/comments | https://api.github.com/repos/huggingface/transformers/issues/12013/events | https://github.com/huggingface/transformers/pull/12013 | 910,598,040 | MDExOlB1bGxSZXF1ZXN0NjYwOTk1NDM1 | 12,013 | [Flax] Refactor MLM | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,622 | 1,622 | 1,622 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Simplify MLM pretraining script a bit
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12013/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12013",
"html_url": "https://github.com/huggingface/transformers/pull/12013",
"diff_url": "https://github.com/huggingface/transformers/pull/12013.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12013.patch",
"merged_at": 1622734292000
} |
https://api.github.com/repos/huggingface/transformers/issues/12012 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12012/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12012/comments | https://api.github.com/repos/huggingface/transformers/issues/12012/events | https://github.com/huggingface/transformers/issues/12012 | 910,504,991 | MDU6SXNzdWU5MTA1MDQ5OTE= | 12,012 | RAG end to end with RAY throws pickling error | {
"login": "sb1992",
"id": 10261100,
"node_id": "MDQ6VXNlcjEwMjYxMTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/10261100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sb1992",
"html_url": "https://github.com/sb1992",
"followers_url": "https://api.github.com/users/sb1992/followers",
"following_url": "https://api.github.com/users/sb1992/following{/other_user}",
"gists_url": "https://api.github.com/users/sb1992/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sb1992/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sb1992/subscriptions",
"organizations_url": "https://api.github.com/users/sb1992/orgs",
"repos_url": "https://api.github.com/users/sb1992/repos",
"events_url": "https://api.github.com/users/sb1992/events{/privacy}",
"received_events_url": "https://api.github.com/users/sb1992/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Check this[ StackOverflow question](https://stackoverflow.com/questions/67798070/raytune-is-throwing-error-module-pickle-has-no-attribute-picklebuffer-whe). btw I use python 3.8 and it worked perfectly for me.",
"Yup i did but that is also just the question without any answer unfortunately.",
"Hi there, I executed the code again by installing RAY just to double-check. It is working perfectly for me. I also use an anaconda and RAY with 1.3.0. \r\n\r\nThe only difference is python 3.8 instead of 3.7. \r\n\r\nI would suggest you to update the anaconda and trying to run the script.",
"Alright thank you, i used python 3.8 now ( though my cuda is 10.1), That error looks resolved but something new pops up, Still trying on dummy data\r\n\r\n File \"finetune_rag.py\", line 790, in <module>\r\n main(args)\r\n File \"finetune_rag.py\", line 727, in main\r\n model: GenerativeQAModule = GenerativeQAModule(args)\r\n File \"finetune_rag.py\", line 131, in __init__\r\n retriever.set_ctx_encoder_tokenizer(ctx_encoder_tokenizer)\r\n AttributeError: 'RagRayDistributedRetriever' object has no attribute 'set_ctx_encoder_tokenizer'\r\n Exception ignored in: <function ActorHandle.__del__ at 0x7fc4c143aee0>\r\n Traceback (most recent call last):\r\n File \"/home/shraeyb/anaconda3/envs/py38/lib/python3.8/site-packages/ray/actor.py\", line 809, in __del__\r\n AttributeError: 'NoneType' object has no attribute 'global_worker'\r\n Exception ignored in: <function ActorHandle.__del__ at 0x7fc4c143aee0>\r\n Traceback (most recent call last):\r\n File \"/home/shraeyb/anaconda3/envs/py38/lib/python3.8/site-packages/ray/actor.py\", line 809, in __del__\r\n AttributeError: 'NoneType' object has no attribute 'global_worker'\r\n Exception ignored in: <function ActorHandle.__del__ at 0x7fc4c143aee0>\r\n Traceback (most recent call last):\r\n File \"/home/shraeyb/anaconda3/envs/py38/lib/python3.8/site-packages/ray/actor.py\", line 809, in __del__\r\n AttributeError: 'NoneType' object has no attribute 'global_worker'\r\n Exception ignored in: <function ActorHandle.__del__ at 0x7fc4c143aee0>\r\n Traceback (most recent call last):\r\n File \"/home/shraeyb/anaconda3/envs/py38/lib/python3.8/site-packages/ray/actor.py\", line 809, in __del__\r\n AttributeError: 'NoneType' object has no attribute 'global_worker'\r\n Stopped all 13 Ray processes.",
"Ah, I think you have to install the library from sources. Still, this is not in the pip version.",
"You mean ray 1.3 from source?",
"Tranformers library.\n\nOn Fri, Jun 4, 2021, 02:32 Shraey Bhatia ***@***.***> wrote:\n\n> You mean ray 1.3 from source?\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/12012#issuecomment-853916373>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGRFOV4TWZSIZNP3IODTQ6HARANCNFSM46AZYZBQ>\n> .\n>\n",
"Thank you. This solves it. Indded needs to be python 3.8\r\nThough i have a few 8gb gpus so cannot really fit on my box, goes out of memory. What gpu config did you use?",
"Oh. I am working with 32Gb gpus. This has nearly 800 M parameters. Bart\nlarge model and two Bert base models.\n\nOn Fri, Jun 4, 2021, 02:58 Shraey Bhatia ***@***.***> wrote:\n\n> Thank you. This solves it. Indded needs to be python 3.8\n> Though i have a few 8gb gpus so cannot really fit on my box, goes out of\n> memory. What gpu config did you use?\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/12012#issuecomment-853935772>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGS4TI7XJMAZAK3NOXDTQ6KCNANCNFSM46AZYZBQ>\n> .\n>\n",
"No worries. thats fine, I will have to submit jobs to a cluster ( is just tedious to do that). Thank you"
] | 1,622 | 1,622 | 1,622 | NONE | null | **Environment info:**
Transformers: 4.5.1
Platform: Ubuntu
Python: 3.7
Torch: 1.6.0
GPUs: yes
Distributed: Ray (1.3.0)
**Information**
I am using the RAG end2end-retriever from the examples code:
examples/research_projects/rag-end2end-retriever
For the time being I am just trying the script given there on dummy data:
`sh ./test_run/test_finetune.sh`
I used the config given there, except that I changed the number of GPUs to 4 and changed the GPU order.
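A minimal sanity check, assuming, as the comments later confirm, that the failure comes from `pickle.PickleBuffer` (pickle protocol 5) only existing on Python 3.8+, while this environment runs 3.7:

```python
import pickle
import sys

# pickle.PickleBuffer was added in Python 3.8 (PEP 574); Ray's pickle5-based
# serialization fails with AttributeError on 3.7, as in the traceback below.
print(sys.version_info)
print(hasattr(pickle, "PickleBuffer"))  # False on 3.7, True on 3.8+
```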
**Who can help**
@shamanez
**Error**
Throws pickling error
Loading passages from test_run/dummy-kb/my_knowledge_dataset
Traceback (most recent call last):
File "finetune_rag.py", line 790, in <module>
main(args)
File "finetune_rag.py", line 727, in main
model: GenerativeQAModule = GenerativeQAModule(args)
File "finetune_rag.py", line 124, in __init__
hparams.model_name_or_path, hparams.actor_handles, config=config
File "/home/shraeyb/transformers/examples/research_projects/rag-end2end-retriever/distributed_ray_retriever.py", line 165, in from_pretrained
index=index,
File "/home/shraeyb/transformers/examples/research_projects/rag-end2end-retriever/distributed_ray_retriever.py", line 93, in __init__
for worker in self.retrieval_workers
File "/home/shraeyb/transformers/examples/research_projects/rag-end2end-retriever/distributed_ray_retriever.py", line 93, in <listcomp>
for worker in self.retrieval_workers
File "/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/actor.py", line 112, in remote
return self._remote(args, kwargs)
File "/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/actor.py", line 153, in _remote
return invocation(args, kwargs)
File "/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/actor.py", line 147, in invocation
num_returns=num_returns)
File "/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/actor.py", line 865, in _actor_method_call
list_args, name, num_returns, self._ray_actor_method_cpus)
File "python/ray/_raylet.pyx", line 1359, in ray._raylet.CoreWorker.submit_actor_task
File "python/ray/_raylet.pyx", line 1364, in ray._raylet.CoreWorker.submit_actor_task
File "python/ray/_raylet.pyx", line 304, in ray._raylet.prepare_args
File "/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/serialization.py", line 324, in serialize
return self._serialize_to_msgpack(value)
File "/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/serialization.py", line 304, in _serialize_to_msgpack
self._serialize_to_pickle5(metadata, python_objects)
File "/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/serialization.py", line 264, in _serialize_to_pickle5
raise e
File "/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/serialization.py", line 261, in _serialize_to_pickle5
value, protocol=5, buffer_callback=writer.buffer_callback)
File "/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 73, in dumps
cp.dump(obj)
File "/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 580, in dump
return Pickler.dump(self, obj)
File "pyarrow/io.pxi", line 1021, in pyarrow.lib.Buffer.__reduce_ex__
AttributeError: module 'pickle' has no attribute 'PickleBuffer'
INFO:wandb.internal.internal:Internal process exited
/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
"update your install command.", FutureWarning)
Stopped all 13 Ray processes. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12012/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12011 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12011/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12011/comments | https://api.github.com/repos/huggingface/transformers/issues/12011/events | https://github.com/huggingface/transformers/pull/12011 | 910,459,995 | MDExOlB1bGxSZXF1ZXN0NjYwODc2OTg5 | 12,011 | Add mlm pretraining xla torch readme | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,622 | 1,623 | 1,623 | MEMBER | null | # What does this PR do?
Adds stats about MLM pretraining
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12011/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12011",
"html_url": "https://github.com/huggingface/transformers/pull/12011",
"diff_url": "https://github.com/huggingface/transformers/pull/12011.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12011.patch",
"merged_at": 1623663081000
} |
https://api.github.com/repos/huggingface/transformers/issues/12010 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12010/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12010/comments | https://api.github.com/repos/huggingface/transformers/issues/12010/events | https://github.com/huggingface/transformers/issues/12010 | 910,455,398 | MDU6SXNzdWU5MTA0NTUzOTg= | 12,010 | Translation example generates the same input | {
"login": "puraminy",
"id": 5293185,
"node_id": "MDQ6VXNlcjUyOTMxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5293185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/puraminy",
"html_url": "https://github.com/puraminy",
"followers_url": "https://api.github.com/users/puraminy/followers",
"following_url": "https://api.github.com/users/puraminy/following{/other_user}",
"gists_url": "https://api.github.com/users/puraminy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/puraminy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/puraminy/subscriptions",
"organizations_url": "https://api.github.com/users/puraminy/orgs",
"repos_url": "https://api.github.com/users/puraminy/repos",
"events_url": "https://api.github.com/users/puraminy/events{/privacy}",
"received_events_url": "https://api.github.com/users/puraminy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @puraminy,\r\n\r\nCould you provide a reproducible code snippet? Otherwise, it's very difficult to reproduce the error you got.",
"@patrickvonplaten \r\n\r\nHere is my code on github\r\n\r\nhttps://github.com/puraminy/mt5-comet\r\n\r\nyou should be able to run `run_trans` in the root folder. You can see my settings there. I also changed `run_translation` a bit to adopt it with my local data. \r\n\r\nThe data were also provided there. For test I decreased the number of train and test data in input parameters. I didn't check it recently, but I guess it's the last one with the error I specified above. \r\nThere is another seq2seq task, which follows run_translation, named run_comet. That task is for English 2 English. It also generates the input than target. \r\nHowever, for now you can first try `run_trans` and for example check the generated texts by `do_predict`\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,626 | 1,626 | NONE | null | I trained the [translation](https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation) example using a custom JSON file with the format specified in the README.
It has a bug (I don't know where), but strangely it just generates the same input back. For example, if you set `--do_predict`, `generated_predictions.txt` contains copies of the inputs (or text in the inputs' language or format). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12010/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12009 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12009/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12009/comments | https://api.github.com/repos/huggingface/transformers/issues/12009/events | https://github.com/huggingface/transformers/issues/12009 | 910,397,477 | MDU6SXNzdWU5MTAzOTc0Nzc= | 12,009 | some issue in FlaxBertForMultipleChoice | {
"login": "thevasudevgupta",
"id": 53136577,
"node_id": "MDQ6VXNlcjUzMTM2NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thevasudevgupta",
"html_url": "https://github.com/thevasudevgupta",
"followers_url": "https://api.github.com/users/thevasudevgupta/followers",
"following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}",
"gists_url": "https://api.github.com/users/thevasudevgupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thevasudevgupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thevasudevgupta/subscriptions",
"organizations_url": "https://api.github.com/users/thevasudevgupta/orgs",
"repos_url": "https://api.github.com/users/thevasudevgupta/repos",
"events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/thevasudevgupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ah I see! \r\n\r\nSo multiple choice is actually the only class where the `input_ids` have to be of shape `(batch_size, num_choices, seq_length)` instead of `(batch_size, seq_length)` (see [docs](https://huggingface.co/transformers/model_doc/bert.html?highlight=flaxbert#flaxbertformultiplechoice) here).\r\n\r\nThis means that the code example should be changed to:\r\n\r\n```python\r\nfrom transformers import BertConfig, FlaxBertForMultipleChoice\r\nimport numpy as np\r\nmodel = FlaxBertForMultipleChoice(BertConfig())\r\n\r\nmodel(np.ones((1, 1, 4)))\r\n```\r\nin order to work :-)",
"Okay. I missed that😅. Sorry for this simple issue. I will fix bigbird also accordingly. ",
"Absolutely no worries ;-) "
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.7.0.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
## Information
Model I am using (Bert, XLNet ...): Bert
## To reproduce
Steps to reproduce the behavior:
```python3
from transformers import BertConfig, FlaxBertForMultipleChoice
import numpy as np
model = FlaxBertForMultipleChoice(BertConfig())
model(np.ones((1, 4)))
```
Output
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/vasudevgupta/Local/transformers/src/transformers/models/bert/modeling_flax_bert.py", line 600, in __call__
return self.module.apply(
File "/Users/vasudevgupta/miniconda3/lib/python3.8/site-packages/flax/linen/module.py", line 936, in apply
return apply(
File "/Users/vasudevgupta/miniconda3/lib/python3.8/site-packages/flax/core/scope.py", line 687, in wrapper
y = fn(root, *args, **kwargs)
File "/Users/vasudevgupta/miniconda3/lib/python3.8/site-packages/flax/linen/module.py", line 1178, in scope_fn
return fn(module.clone(parent=scope), *args, **kwargs)
File "/Users/vasudevgupta/miniconda3/lib/python3.8/site-packages/flax/linen/module.py", line 275, in wrapped_module_method
y = fun(self, *args, **kwargs)
File "/Users/vasudevgupta/Local/transformers/src/transformers/models/bert/modeling_flax_bert.py", line 1022, in __call__
reshaped_logits = logits.reshape(-1, num_choices)
File "/Users/vasudevgupta/miniconda3/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py", line 1322, in _reshape
newshape = _compute_newshape(a, args[0] if len(args) == 1 else args)
File "/Users/vasudevgupta/miniconda3/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py", line 1316, in _compute_newshape
return tuple(- core.divide_shape_sizes(np.shape(a), newshape)
File "/Users/vasudevgupta/miniconda3/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py", line 1316, in <genexpr>
return tuple(- core.divide_shape_sizes(np.shape(a), newshape)
File "/Users/vasudevgupta/miniconda3/lib/python3.8/site-packages/jax/core.py", line 1360, in divide_shape_sizes
return handler.divide_shape_sizes(ds[:len(s1)], ds[len(s1):])
File "/Users/vasudevgupta/miniconda3/lib/python3.8/site-packages/jax/core.py", line 1280, in divide_shape_sizes
raise InconclusiveDimensionOperation(f"Cannot divide evenly the sizes of shapes {tuple(s1)} and {tuple(s2)}")
jax.core.InconclusiveDimensionOperation: Cannot divide evenly the sizes of shapes (1, 1) and (-1, 4)
```
I am probably missing something here. Please help me figure out the issue.
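Following the resolution in the comments: multiple choice is the one head whose `input_ids` must have shape `(batch_size, num_choices, seq_length)`, so this variant of the snippet above runs without the reshape error.

```python
from transformers import BertConfig, FlaxBertForMultipleChoice
import numpy as np

model = FlaxBertForMultipleChoice(BertConfig())
# The extra num_choices axis is required: (batch_size, num_choices, seq_length)
model(np.ones((1, 1, 4)))
```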
@LysandreJik @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12009/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12008 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12008/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12008/comments | https://api.github.com/repos/huggingface/transformers/issues/12008/events | https://github.com/huggingface/transformers/issues/12008 | 910,322,349 | MDU6SXNzdWU5MTAzMjIzNDk= | 12,008 | Issue while using DPR with tensorflow and py torch | {
"login": "MaheshChandrra",
"id": 13826929,
"node_id": "MDQ6VXNlcjEzODI2OTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/13826929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaheshChandrra",
"html_url": "https://github.com/MaheshChandrra",
"followers_url": "https://api.github.com/users/MaheshChandrra/followers",
"following_url": "https://api.github.com/users/MaheshChandrra/following{/other_user}",
"gists_url": "https://api.github.com/users/MaheshChandrra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MaheshChandrra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MaheshChandrra/subscriptions",
"organizations_url": "https://api.github.com/users/MaheshChandrra/orgs",
"repos_url": "https://api.github.com/users/MaheshChandrra/repos",
"events_url": "https://api.github.com/users/MaheshChandrra/events{/privacy}",
"received_events_url": "https://api.github.com/users/MaheshChandrra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @MaheshChandrra, \r\n\r\nIt seems like there is a type as it should be called `outputs.start_logits` instead of `outputs.stat_logits`. Would you like to open a PR to fix it? :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,626 | 1,626 | NONE | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Windows
- Python version: 3.6
- PyTorch version (GPU? No): 1.7.0
- Tensorflow version (GPU? No): 2.3.1
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Hi @LysandreJik, @Rocketknight1, @patrickvonplaten, @lhoestq, @sgugger,
I'm trying the basic example mentioned in the DPR documentation but am facing the issue below.
Can you please help me? Also, can you please let me know which functionality I can use to get the answer for a given question from the provided text, as `relevance_logits` just gives a number whose significance I don't understand.
Thank you all!
```
#########################################################################
##Using **pt** as return tensors
from transformers import DPRReader, DPRReaderTokenizer
tokenizer = DPRReaderTokenizer.from_pretrained('facebook/dpr-reader-single-nq-base')
model = DPRReader.from_pretrained('facebook/dpr-reader-single-nq-base')
encoded_inputs = tokenizer(
questions=["What is love ?"],
titles=["Haddaway"],
texts=["'What Is Love' is a song recorded by the artist Haddaway"],
return_tensors='pt'
)
outputs = model(**encoded_inputs)
start_logits = outputs.stat_logits
end_logits = outputs.end_logits
relevance_logits = outputs.relevance_logits
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-1-86e12c26a869> in <module>
9 )
10 outputs = model(**encoded_inputs)
---> 11 start_logits = outputs.stat_logits
12 end_logits = outputs.end_logits
13 relevance_logits = outputs.relevance_logits
AttributeError: 'DPRReaderOutput' object has no attribute 'stat_logits'
#########################################################################
##Using tf as return_tensors
from transformers import DPRReader, DPRReaderTokenizer
tokenizer = DPRReaderTokenizer.from_pretrained('facebook/dpr-reader-single-nq-base')
model = DPRReader.from_pretrained('facebook/dpr-reader-single-nq-base')
encoded_inputs = tokenizer(
questions=["What is love ?"],
titles=["Haddaway"],
texts=["'What Is Love' is a song recorded by the artist Haddaway"],
return_tensors='tf'
)
outputs = model(**encoded_inputs)
start_logits = outputs.stat_logits
end_logits = outputs.end_logits
relevance_logits = outputs.relevance_logits
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-6-6f88caac8403> in <module>
8 return_tensors='tf'
9 )
---> 10 outputs = model(**encoded_inputs)
11 start_logits = outputs.stat_logits
12 end_logits = outputs.end_logits
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~\Anaconda3\lib\site-packages\transformers\models\dpr\modeling_dpr.py in forward(self, input_ids, attention_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)
641 raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
642 elif input_ids is not None:
--> 643 input_shape = input_ids.size()
644 elif inputs_embeds is not None:
645 input_shape = inputs_embeds.size()[:-1]
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'size'
```
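For reference, a corrected PyTorch variant based on the reply above: the attribute is `start_logits`, not `stat_logits`, and the PyTorch `DPRReader` needs `return_tensors='pt'`, which also explains the second traceback, where it received TensorFlow tensors.

```python
from transformers import DPRReader, DPRReaderTokenizer

tokenizer = DPRReaderTokenizer.from_pretrained('facebook/dpr-reader-single-nq-base')
model = DPRReader.from_pretrained('facebook/dpr-reader-single-nq-base')
encoded_inputs = tokenizer(
    questions=["What is love ?"],
    titles=["Haddaway"],
    texts=["'What Is Love' is a song recorded by the artist Haddaway"],
    return_tensors='pt'  # PyTorch tensors for the PyTorch DPRReader
)
outputs = model(**encoded_inputs)
start_logits = outputs.start_logits  # 'start_logits', not 'stat_logits'
end_logits = outputs.end_logits
relevance_logits = outputs.relevance_logits
```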
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12008/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12007 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12007/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12007/comments | https://api.github.com/repos/huggingface/transformers/issues/12007/events | https://github.com/huggingface/transformers/pull/12007 | 910,310,883 | MDExOlB1bGxSZXF1ZXN0NjYwNzUyMjY3 | 12,007 | Fix megatron_gpt2 attention block's causal mask | {
"login": "novatig",
"id": 16716298,
"node_id": "MDQ6VXNlcjE2NzE2Mjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/16716298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/novatig",
"html_url": "https://github.com/novatig",
"followers_url": "https://api.github.com/users/novatig/followers",
"following_url": "https://api.github.com/users/novatig/following{/other_user}",
"gists_url": "https://api.github.com/users/novatig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/novatig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/novatig/subscriptions",
"organizations_url": "https://api.github.com/users/novatig/orgs",
"repos_url": "https://api.github.com/users/novatig/repos",
"events_url": "https://api.github.com/users/novatig/events{/privacy}",
"received_events_url": "https://api.github.com/users/novatig/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, I am struggling with the same issue as I detailed in #12004\r\n\r\nDid you happen to confirm that once you make this change the two logits produced by Megatron GPT2 and transformers GPT2 are identical?\r\n\r\nFrom what I have experimented, making attention mask as lower triangular matrix did not produce the same logits, nor generated sensible sentences :(",
"Plus, although I did not check thorougly, [this line](https://github.com/huggingface/transformers/blob/61c506349134db0a0a2fd6fb2eff8e29a2f84e79/src/transformers/models/gpt2/modeling_gpt2.py#L132) seems to cancel out the wrong conversion, you might wanna take a look!",
"Could you share the generation snippet to reproduce the following result? \r\n```\r\nHow are you doing these days?\r\nI'm just trying to get through the day. I've been working on a lot of things, but it's hard when your kids come home and they're not here with me anymore because we don't have that connection like before.\" She said she has had some good times since her son was born in January 2012: \"It feels great being able for him [to] be around his dad again,\" he added as if remembering how much fun their relationship used too! The couple also share two daughters together – Ella Rose (born April 2013) who is now 10 years old; Aviana Grace Marie-Gracee DeSantis Jr., 5th grade daughter from an earlier marriage whom Lohan recently\r\n```",
"Hi,\r\n\r\nI ran the scripts that you provided in the issue, the logits with Megatron-LM and transformers are close relative to fp16 precision. There is of course the caveat that the embedding table in transformer's GPT2 has size 50257, while in Megatron-LM is 50304 (therefore we must apply [this](https://github.com/huggingface/transformers/blob/master/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py#L73) step).\r\n(see attached code).\r\n\r\n> Plus, although I did not check thorougly, this line seems to cancel out the wrong conversion, you might wanna take a look!\r\n\r\nThat is initialization, which we overwrite with the checkpoint\r\n\r\n> Could you share the generation snippet to reproduce the following result?\r\n\r\nI used the code you provided in the issue!\r\n\r\n\r\n```\r\nimport sys\r\nsys.path.append('/project/Megatron-LM')\r\n\r\nimport torch\r\nfrom megatron import get_args, get_tokenizer, initialize_megatron, mpu\r\nfrom megatron.model import GPTModel\r\nfrom megatron.training import get_model\r\nfrom megatron.checkpointing import load_checkpoint\r\nfrom megatron.utils import get_ltor_masks_and_position_ids\r\nfrom transformers import GPT2LMHeadModel\r\nfrom tokenizers import ByteLevelBPETokenizer\r\n\r\n\r\ndef initialize():\r\n model_path = \"/project/megatron-gpt2-345m\"\r\n sys.argv.extend(\r\n [\r\n \"--num-layers\", \"24\",\r\n \"--hidden-size\", \"1024\",\r\n \"--num-attention-heads\", \"16\",\r\n \"--seq-length\", \"1024\",\r\n \"--max-position-embeddings\", \"1024\",\r\n \"--tokenizer-type\", \"GPT2BPETokenizer\",\r\n \"--fp16\",\r\n \"--load\", str(model_path),\r\n \"--vocab-file\", str(model_path + \"/vocab.json\"),\r\n \"--merge-file\", str(model_path + \"/merges.txt\"),\r\n \"--micro-batch-size\", \"1\",\r\n \"--checkpoint-activations\",\r\n \"--no-scaled-masked-softmax-fusion\",\r\n \"--no-load-rng\",\r\n \"--no-load-optim\"\r\n ]\r\n )\r\n initialize_megatron(ignore_unknown_args=True)\r\n\r\nif __name__ == \"__main__\":\r\n if mpu.is_unitialized():\r\n initialize()\r\n args = get_args()\r\n\r\n tokenizer = get_tokenizer().tokenizer #.tokenizer\r\n def model_provider(pre_process=True, post_process=True):\r\n return GPTModel(num_tokentypes=0, parallel_output=False,\r\n pre_process=True, post_process=True)\r\n model = get_model(model_provider)\r\n load_checkpoint(model=model, optimizer=None, lr_scheduler=None)\r\n model = model[0]\r\n model.eval()\r\n\r\n inputs = \"Hi, how are you doing?\"\r\n input_ids = torch.tensor(tokenizer.encode(inputs)).unsqueeze(0)\r\n attention_mask, _, position_ids = get_ltor_masks_and_position_ids(\r\n input_ids,\r\n 0,\r\n reset_position_ids=False,\r\n reset_attention_mask=False,\r\n eod_mask_loss=False,\r\n )\r\n input_ids = input_ids.cuda()\r\n position_ids = position_ids.cuda()\r\n attention_mask = attention_mask.cuda()\r\n logits = model(input_ids, position_ids, attention_mask)\r\n\r\n model2 = GPT2LMHeadModel.from_pretrained(\"/project/megatron-gpt2-345m/\")\r\n tok = ByteLevelBPETokenizer(\r\n \"/project/megatron-gpt2-345m/vocab.json\",\r\n \"/project/megatron-gpt2-345m/merges.txt\", unicode_normalizer=\"nfkc\")\r\n \r\n input_ids = torch.tensor(tok.encode(\"Hi, how are you doing?\").ids).unsqueeze(0).cuda()\r\n model = model2.cuda()\r\n model2.eval()\r\n out = model2(input_ids)\r\n\r\n for j in range(out.logits.shape[1]):\r\n for i in range(out.logits.shape[2]):\r\n a, b= out.logits[0,j,i].item(), logits[0,j,i].item()\r\n assert(abs(a-b) / max(max(abs(a),abs(b)), 0.5) < 0.1)\r\n```\r\n\r\n\r\n",
"@novatig Thanks for solving my issue!\r\nAs for the issue of @hwijeen, where he finds the split in self-attention(QKV and heads) of megatron and huggingface in different order, but you get the close answers. \r\nI think one possible reason is that there exists three versions of checkpoint in megatron, which needs transform between them, shown as https://github.com/NVIDIA/Megatron-LM/blob/42c1cf4279acea5a554500dcb552211f44cbec45/megatron/checkpointing.py#L209. \r\nSo the version of checkpoint should also be reported when talking about the wrong results.\r\nOr it will be better to modify the convert code for supporting different checkpoint versions.",
"Hi,\r\n\r\ncould you perhaps add a `test_modeling_megatron_gpt2.py` file to the tests folder? As this model is not a new model (only a conversion script is required), it could look very similar to [the one of BORT](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_bort.py) (which is also just a conversion script to convert the weights to a BertModel). \r\n\r\nThe test should only include an integration test, which checks whether the HuggingFace model outputs the same output tensors (e.g. logits) on the same input data as the original implementation. \r\n\r\nThanks!",
"Hi,\r\n\r\nAs @codecaution pointed out, the problem in my case was that I was working with Megatron checkpoint version 3, whereas the proposed conversion code supports version 0 -- so please ignore the above comments of mine.\r\n",
"Hi,\r\n\r\nMany thanks @hwijeen and @codecaution for clarifying the issue!\r\nI updated the the PR, now the conversion script works also for checkpoints generated by recent versions of Megatron-LM.\r\n\r\n@NielsRogge @LysandreJik I added a brief integration test.\r\nFor simplicity, the reference outputs are not those from Megatron-LM but are from the HuggingFace model, and are close to the ones from Megatron-LM.",
"Thanks @novatig! \r\n\r\n> @NielsRogge @LysandreJik I added a brief integration test.\r\nFor simplicity, the reference outputs are not those from Megatron-LM but are from the HuggingFace model, and are close to the ones from Megatron-LM.\r\n\r\nFor reference, how close are these to the official Megatron GPT-2 output in terms of magnitude?",
"> Thanks @novatig!\r\n> \r\n> > @NielsRogge @LysandreJik I added a brief integration test.\r\n> > For simplicity, the reference outputs are not those from Megatron-LM but are from the HuggingFace model, and are close to the ones from Megatron-LM.\r\n> \r\n> For reference, how close are these to the official Megatron GPT-2 output in terms of magnitude?\r\n\r\nFor reference, the test values (taken from a \"diagonal\" of the returned logits tensor) are:\r\n```\r\nmegatron = [4.9492188, -0.2866211, -1.2041016, -4.0351562, -0.5180664, -5.2148438, -1.2412109, -1.8310547, -1.7675781, -4.71875, -0.23901367, -1.0761719, -2.1699219, 0.41235352, -3.8007812, -4.0585938, -2.5292969, -3.3808594, 4.3789062]\r\nhf-gpt2 = [4.9414, -0.2920, -1.2148, -4.0273, -0.5161, -5.2109, -1.2412, -1.8301, -1.7734, -4.7148, -0.2317, -1.0811, -2.1777, 0.4141, -3.7969, -4.0586, -2.5332, -3.3809, 4.3867]\r\n```\r\nThe absolute error is approximately <= 1e-2.\r\n\r\nI updated the PR with style changes.\r\nSorry for all the commits! I did not realize someone had to manually confirm approval for running the tests!",
"> Hi,\r\n> \r\n> Many thanks @hwijeen and @codecaution for clarifying the issue!\r\n> I updated the the PR, now the conversion script works also for checkpoints generated by recent versions of Megatron-LM.\r\n> \r\n> @NielsRogge @LysandreJik I added a brief integration test.\r\n> For simplicity, the reference outputs are not those from Megatron-LM but are from the HuggingFace model, and are close to the ones from Megatron-LM.\r\n\r\nHi! @novatig \r\nReally thanks for your effort! To make this PR better, I would like to give my own suggestion. \r\nAs Megatron-LM is a popular repo for training large transformer-based model, it will be better to take this into consideration in the convert code, including different mode size and model-parallel.",
"> Hi! @novatig\r\n> Really thanks for your effort! To make this PR better, I would like to give my own suggestion.\r\n> As Megatron-LM is a popular repo for training large transformer-based model, it will be better to take this into consideration in the convert code, including different mode size and model-parallel.\r\n\r\nYes, that's a good idea!\r\n@codecaution please review my last commit to check if it looks like what you were thinking about.\r\n\r\n",
"> > Hi! @novatig\r\n> > Really thanks for your effort! To make this PR better, I would like to give my own suggestion.\r\n> > As Megatron-LM is a popular repo for training large transformer-based model, it will be better to take this into consideration in the convert code, including different mode size and model-parallel.\r\n> \r\n> Yes, that's a good idea!\r\n> @codecaution please review my last commit to check if it looks like what you were thinking about.\r\n\r\nThanks! Good job!"
] | 1,622 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
Fixes the conversion between Megatron-LM's GPT2 parameters and transformers' GPT2.
In Megatron-LM the attention mask is implemented differently and is not part of the provided checkpoint.
This PR initializes the mask as in [GPT2](https://github.com/huggingface/transformers/blob/master/src/transformers/models/gpt2/modeling_gpt2.py#L131).
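For reference, a sketch of the GPT2-style causal-mask initialization being mirrored here; `max_positions` stands in for the configured sequence length:

```python
import torch

max_positions = 1024  # stands in for config.max_position_embeddings
# Lower-triangular causal mask, as registered as a buffer in GPT2Attention
bias = torch.tril(torch.ones((max_positions, max_positions), dtype=torch.uint8)).view(
    1, 1, max_positions, max_positions
)
```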
Fixes [#11916](https://github.com/huggingface/transformers/issues/11916) and [#12004](https://github.com/huggingface/transformers/issues/12004).
Regarding [#11916](https://github.com/huggingface/transformers/issues/11916), this PR lowers the perplexity to 20 (cf. 30 for gpt2) without fine-tuning.
Regarding [#12004](https://github.com/huggingface/transformers/issues/12004), the prompt in the issue is now continued coherently.
> How are you doing these days?
> I'm just trying to get through the day. I've been working on a lot of things, but it's hard when your kids come home and they're not here with me anymore because we don't have that connection like before." She said she has had some good times since her son was born in January 2012: "It feels great being able for him [to] be around his dad again," he added as if remembering how much fun their relationship used too! The couple also share two daughters together – Ella Rose (born April 2013) who is now 10 years old; Aviana Grace Marie-Gracee DeSantis Jr., 5th grade daughter from an earlier marriage whom Lohan recently
## Who can review?
The PR for the megatron models was reviewed by @LysandreJik
@jdemouth | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12007/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12007",
"html_url": "https://github.com/huggingface/transformers/pull/12007",
"diff_url": "https://github.com/huggingface/transformers/pull/12007.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12007.patch",
"merged_at": 1623661076000
} |
https://api.github.com/repos/huggingface/transformers/issues/12006 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12006/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12006/comments | https://api.github.com/repos/huggingface/transformers/issues/12006/events | https://github.com/huggingface/transformers/issues/12006 | 910,306,518 | MDU6SXNzdWU5MTAzMDY1MTg= | 12,006 | Fluctuating embedding given by different random seed during inference | {
"login": "MatthewCYM",
"id": 40845677,
"node_id": "MDQ6VXNlcjQwODQ1Njc3",
"avatar_url": "https://avatars.githubusercontent.com/u/40845677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MatthewCYM",
"html_url": "https://github.com/MatthewCYM",
"followers_url": "https://api.github.com/users/MatthewCYM/followers",
"following_url": "https://api.github.com/users/MatthewCYM/following{/other_user}",
"gists_url": "https://api.github.com/users/MatthewCYM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MatthewCYM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MatthewCYM/subscriptions",
"organizations_url": "https://api.github.com/users/MatthewCYM/orgs",
"repos_url": "https://api.github.com/users/MatthewCYM/repos",
"events_url": "https://api.github.com/users/MatthewCYM/events{/privacy}",
"received_events_url": "https://api.github.com/users/MatthewCYM/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,626 | 1,626 | NONE | null | ## Environment info
- `transformers` version: 4.2.1
- Platform: Linux-5.4.0-65-generic-x86_64-with-glibc2.10
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1 (GPU)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
## Information
Model I am using: Bert
The problem arises when using my own modified scripts:
getting the pooler output from bert-base-uncased with different random seeds.
## Expected behavior
The embeddings fluctuate across different random seeds during inference, although deterministic output was expected.
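A minimal determinism check; this is an assumption-based sketch, since the modified script is not shown. The usual cause of seed-dependent embeddings is active dropout, i.e. `model.eval()` not being called:

```python
import torch
from transformers import BertModel, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()  # disables dropout; without this, outputs vary with the seed

inputs = tok("hello world", return_tensors="pt")
with torch.no_grad():
    torch.manual_seed(0)
    a = model(**inputs).pooler_output
    torch.manual_seed(1)
    b = model(**inputs).pooler_output
print(torch.allclose(a, b))  # expected: True in eval mode
```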
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12006/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12005 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12005/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12005/comments | https://api.github.com/repos/huggingface/transformers/issues/12005/events | https://github.com/huggingface/transformers/issues/12005 | 910,303,381 | MDU6SXNzdWU5MTAzMDMzODE= | 12,005 | where is the code for DetrFeatureExtractor, DetrForObjectDetection | {
"login": "zhangbo2008",
"id": 35842504,
"node_id": "MDQ6VXNlcjM1ODQyNTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/35842504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangbo2008",
"html_url": "https://github.com/zhangbo2008",
"followers_url": "https://api.github.com/users/zhangbo2008/followers",
"following_url": "https://api.github.com/users/zhangbo2008/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangbo2008/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangbo2008/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangbo2008/subscriptions",
"organizations_url": "https://api.github.com/users/zhangbo2008/orgs",
"repos_url": "https://api.github.com/users/zhangbo2008/repos",
"events_url": "https://api.github.com/users/zhangbo2008/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangbo2008/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Haha it's not merged yet, thanks for your interest :) will be available soon",
"thanks dude",
"Resolved by #11653 . Code can be found [here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/detr/modeling_detr.py)."
] | 1,622 | 1,623 | 1,623 | NONE | null | Hello my dear friend.
I am longing for the model at https://huggingface.co/facebook/detr-resnet-50.
I cannot find its code in transformers==4.7.0.dev0 or 4.6.1. Please help me. Appreciated.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12005/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12004 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12004/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12004/comments | https://api.github.com/repos/huggingface/transformers/issues/12004/events | https://github.com/huggingface/transformers/issues/12004 | 910,271,155 | MDU6SXNzdWU5MTAyNzExNTU= | 12,004 | Megatron GPT2 not compatible with transformers | {
"login": "hwijeen",
"id": 29157715,
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hwijeen",
"html_url": "https://github.com/hwijeen",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is the code to check if the two gpt2s produce the same logits. Hope this helps and I would be happy to provide further information if requested.\r\n* megatron-LM\r\n```python\r\nimport torch\r\nfrom megatron import get_args, get_tokenizer, initialize_megatron, mpu\r\nfrom megatron.model import GPTModel\r\nfrom megatron.training import get_model\r\nfrom megatron.checkpointing import load_checkpoint\r\nfrom megatron.utils import get_ltor_masks_and_position_ids\r\n\r\ndef initialize():\r\n model_path = \"/workspace/my_gpt3_1.3B_mp1\"\r\n sys.argv.extend(\r\n [\r\n \"--distributed\",\r\n \"--distributed-backend\",\r\n \"nccl\",\r\n \"--fp16\",\r\n \"--load\", str(model_path),\r\n \"--vocab-file\", str(model_path / \"vocab.json\"),\r\n \"--merge-file\", str(model_path / \"merges.txt\"),\r\n \"--config-path\", str(model_path / \"deploy.json\"),\r\n \"--micro-batch-size\", \"1\",\r\n \"--no-scaled-masked-softmax-fusion\",\r\n \"--no-load-rng\",\r\n \"--no-load-optim\"\r\n ]\r\n )\r\n initialize_megatron(ignore_unknown_args=True)\r\n\r\nif __name__ == \"__main__\":\r\n if mpu.is_unitialized():\r\n initialize()\r\n args = get_args()\r\n\r\n tokenizer = get_tokenizer().tokenizer.tokenizer\r\n model = get_model(lambda: GPTModel(num_tokentypes=0, parallel_output=False))\r\n load_checkpoint(model=model, optimizer=None, lr_scheduler=None)\r\n model.eval()\r\n\r\n inputs = \"Hi, how are you doing?\"\r\n input_ids = torch.tensor(tokenizer.encode(inputs).ids).unsqueeze(0)\r\n attention_mask, _, position_ids = get_ltor_masks_and_position_ids(\r\n input_ids,\r\n 0,\r\n reset_position_ids=False,\r\n reset_attention_mask=False,\r\n eod_mask_loss=False,\r\n )\r\n input_ids = input_ids.cuda()\r\n position_ids = position_ids.cuda()\r\n attention_mask = attention_mask.cuda()\r\n logits = model(input_ids, position_ids, attention_mask)\r\n print(logits.norm())\r\n```\r\n\r\n* transformers\r\n```python\r\nfrom transformers_ import GPT2LMHeadModel\r\nfrom tokenizers import ByteLevelBPETokenizer\r\nimport torch\r\n\r\nmodel = GPT2LMHeadModel.from_pretrained(\".\")\r\ntok = ByteLevelBPETokenizer(\"vocab.json\", \"merges.txt\", unicode_normalizer=\"nfkc\")\r\n\r\ninput_ids = torch.tensor(tok.encode(\"Hi, how are you doing?\").ids).unsqueeze(0).cuda()\r\nmodel = model.cuda()\r\nmodel.eval()\r\nout = model(input_ids)\r\nprint(out.logits.norm())\r\n\r\n```",
"Hi, I also meet the problem about converting Megatron-LM to HuggingFace last week and I want to give my solution, which may help you.\r\n+ The convert code need to be changed as @navatig, where there should be a lower triangular part of the matrix [change](https://github.com/huggingface/transformers/pull/12007/files)\r\n+ Megatron-LM has four kinds of checkpoint versions, shown as [here](https://github.com/NVIDIA/Megatron-LM/blob/42c1cf4279acea5a554500dcb552211f44cbec45/megatron/checkpointing.py#L209). You have also mentioned the different split ways in your issue. So I suggest you to check the version of your own checkpoint, transform is into version 0 (which is the same as the convert script). This code shows how to transform version 0/1 to version 3, I think you can modify it as you need.",
"Thank you!\r\nThe checkpoint I was using was 3.0 and when I modified the attention matrix, it worked!",
"@codecaution Thanks a lot for your help! It was very useful! "
] | 1,622 | 1,623 | 1,623 | CONTRIBUTOR | null | **TLDR:**
The current implementation of attention in GPT2 is different from Megatron-LM's GPT2, raising a compatibility issue. Since the required change is quite minimal, how about changing transformers' implementation to follow Megatron-LM's?
**Quite detailed exploration:**
I used the [script](https://github.com/huggingface/transformers/blob/master/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py) to convert Megatron-LM GPT2 into transformers GPT2.
The script itself works well, but the generation results with the converted checkpoint show that there is something wrong.
```python
from transformers import GPT2LMHeadModel
from transformers import GPT2Tokenizer
model = GPT2LMHeadModel.from_pretrained(".")
tok = GPT2Tokenizer.from_pretrained("gpt2")
input_ids = tok.encode("How are you doing these days?" , return_tensors="pt")
gen_tokens = model.generate(
input_ids,
max_length=150,
min_length=30,
    do_sample=True,
num_beams=1,
temperature=0.9,
top_p=0.8,
top_k=0,
repetition_penalty=5.0,
)
print(tok.batch_decode(gen_tokens)[0])
# How are you doing these days?` `'' ''",'',",..., and '´','',", – ---''truvesabendabyaruobirockeysludboatmanlongsoceyeand how nobody anybody anything no way or what why not with all of theofoldways is by showing that if it's going to skip out onansonsarry hold skator who wonder when they're shown their a one two those well saying so there but behind them only ones don doubt wonders just like he his soon Johnny Louous ever after asking holding good say hey even though its still yet we know as long for now then may be in show off telling said terking something which will start next thing already says
```
I looked into the problem and found out that the logits produced by Megatron-LM GPT-2 and transformers GPT-2 are different.
Careful debugging revealed that it stems from a difference in the way the attention mechanism is implemented.
```python
# Megatron-LM
# splits head first
225 # [sq, b, (np * 3 * hn)] --> [sq, b, np, 3 * hn]
226 new_tensor_shape = mixed_x_layer.size()[:-1] + \
227 (self.num_attention_heads_per_partition,
228 3 * self.hidden_size_per_attention_head)
229 mixed_x_layer = mixed_x_layer.view(*new_tensor_shape)
# and then splits key,query,value
230
231 # [sq, b, np, 3 * hn] --> 3 [sq, b, np, hn]
232 (query_layer,
233 key_layer,
234 value_layer) = mpu.split_tensor_along_last_dim(mixed_x_layer, 3)
# result
query_layer.norm() # tensor(110.7500, device='cuda:0', dtype=torch.float16, grad_fn=<CopyBackwards>)
key_layer.norm() # tensor(125.3125, device='cuda:0', dtype=torch.float16, grad_fn=<CopyBackwards>)
value_layer.norm() # tensor(42.9688, device='cuda:0', dtype=torch.float16, grad_fn=<CopyBackwards>)
```
```python
# transformers
# splits key, query, value first
242 query, key, value = self.c_attn(hidden_states).split(self.split_size, dim=2)
# and then splits head
244 query = self._split_heads(query, self.num_heads, self.head_dim)
245 key = self._split_heads(key, self.num_heads, self.head_dim)
246 value = self._split_heads(value, self.num_heads, self.head_dim)
# result
query.norm() # tensor(103.4170, device='cuda:0', grad_fn=<CopyBackwards>)
key.norm() # tensor(102.9513, device='cuda:0', grad_fn=<CopyBackwards>)
value.norm() # tensor(92.2508, device='cuda:0', grad_fn=<CopyBackwards>)
```
When I modified the transformers GPT2 implementation to match Megatron-LM's, the generation looked correct.
```python
# before
How are you doing these days?` `'' ''",'',",..., and '´','',", – ---''truvesabendabyaruobirockeysludboatmanlongsoceyeand ....
# after
How are you doing these days?, even further all," not a as to also 3 both every if [ always and ever ....
```
The generation results do not look super convincing, probably because this is not a very capable model (only ~300M parameters). When I experimented with my own LM checkpoint of 1.3B, the generation was only sensible when I made the modification.
Plus, I did a sanity check and confirmed that the logits from the Megatron-LM GPT-2 and transformers GPT-2 are the same only when I made the modification [here](https://github.com/huggingface/transformers/blob/61c506349134db0a0a2fd6fb2eff8e29a2f84e79/src/transformers/models/gpt2/modeling_gpt2.py#L242).
```python
# modify to follow Megatron-LM's implementation
else:
# query, key, value = self.c_attn(hidden_states).split(self.split_size, dim=2)
attn_out = self.c_attn(hidden_states)
attn_out = self._split_heads(attn_out, self.num_heads, self.head_dim * 3)
# query = self._split_heads(query, self.num_heads, self.head_dim)
# key = self._split_heads(key, self.num_heads, self.head_dim)
# value = self._split_heads(value, self.num_heads, self.head_dim)
query, key, value = attn_out.split(self.head_dim, 3)
```
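To make the mismatch concrete, here is a minimal, self-contained sketch (the shapes are made up for illustration and are not taken from either codebase) showing that the two split orders carve different query slices out of the same fused projection:
```python
import torch

torch.manual_seed(0)
b, s, h, d = 1, 4, 2, 3                # batch, seq, heads, head_dim (hypothetical)
fused = torch.randn(b, s, h * 3 * d)   # stand-in for the c_attn output

# transformers order: split q/k/v first, then heads
q_hf = fused.split(h * d, dim=2)[0].view(b, s, h, d)

# Megatron-LM order: split heads first, then q/k/v within each head
q_megatron = fused.view(b, s, h, 3 * d).split(d, dim=3)[0]

print(torch.allclose(q_hf, q_megatron))  # False: same weights, different queries
```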
How about merging this change into the master branch? I would be happy to make a PR.
## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-4.18.0-25-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik @jdemouth
## Information
Model I am using (Bert, XLNet ...): GPT2 (converted from Megatron-LM)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on are:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load [gpt-2 checkpoint](https://ngc.nvidia.com/catalog/models/nvidia:megatron_lm_345m) with Megatron-LM and do a forward pass.
2. Convert the checkpoint into transformers-compatible format using the [script](https://github.com/huggingface/transformers/blob/master/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py), load the model with transformers and do the same forward pass.
3. The logits are not the same.
## Expected behavior
1. Generation results of converted GPT2 are sensible.
2. Logits from Megatron-LM GPT2 and transformers GPT2 are the same. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12004/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/12004/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12003 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12003/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12003/comments | https://api.github.com/repos/huggingface/transformers/issues/12003/events | https://github.com/huggingface/transformers/issues/12003 | 910,165,279 | MDU6SXNzdWU5MTAxNjUyNzk= | 12,003 | Fast tokenization fails for the pretrained model | {
"login": "jasonwu0731",
"id": 14951842,
"node_id": "MDQ6VXNlcjE0OTUxODQy",
"avatar_url": "https://avatars.githubusercontent.com/u/14951842?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jasonwu0731",
"html_url": "https://github.com/jasonwu0731",
"followers_url": "https://api.github.com/users/jasonwu0731/followers",
"following_url": "https://api.github.com/users/jasonwu0731/following{/other_user}",
"gists_url": "https://api.github.com/users/jasonwu0731/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jasonwu0731/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jasonwu0731/subscriptions",
"organizations_url": "https://api.github.com/users/jasonwu0731/orgs",
"repos_url": "https://api.github.com/users/jasonwu0731/repos",
"events_url": "https://api.github.com/users/jasonwu0731/events{/privacy}",
"received_events_url": "https://api.github.com/users/jasonwu0731/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,626 | 1,626 | NONE | null | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-5.4.89+-x86_64-with-debian-bullseye-sid
- Python version: 3.6.9
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
Library:
- pipelines: @LysandreJik
## Information
The model I am using is pipeline summarization:
## To reproduce
Steps to reproduce the behavior:
1. `from transformers import pipeline`
2. `summarizer = pipeline("summarization", model="Salesforce/bart-large-xsum-samsum", device=0)`
```
Traceback (most recent call last):
File "run_pipeline.py", line 52, in <module>
summarizer = pipeline("summarization", model=args.model, device=0)
File "/opt/conda/lib/python3.6/site-packages/transformers/pipelines/__init__.py", line 388, in pipeline
tokenizer, revision=revision, use_fast=use_fast, _from_pipeline=task, **model_kwargs
File "/opt/conda/lib/python3.6/site-packages/transformers/models/auto/tokenization_auto.py", line 423, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1710, in from_pretrained
resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs
File "/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1781, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/models/roberta/tokenization_roberta_fast.py", line 173, in __init__
**kwargs,
File "/opt/conda/lib/python3.6/site-packages/transformers/models/gpt2/tokenization_gpt2_fast.py", line 145, in __init__
**kwargs,
File "/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_fast.py", line 96, in __init__
fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
Exception: No such file or directory (os error 2)
```
## Expected behavior
I would expect the model to work with fast tokenization as other models do.
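A possible interim workaround, untested against this exact checkpoint and only a guess based on the traceback (which fails while loading the fast-tokenizer file), is to fall back to the slow tokenizer:
```python
from transformers import pipeline

# use_fast=False avoids loading tokenizer.json, which the traceback above
# suggests is missing from this model repo
summarizer = pipeline(
    "summarization",
    model="Salesforce/bart-large-xsum-samsum",
    use_fast=False,
    device=0,
)
```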
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12003/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12002 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12002/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12002/comments | https://api.github.com/repos/huggingface/transformers/issues/12002/events | https://github.com/huggingface/transformers/issues/12002 | 910,154,447 | MDU6SXNzdWU5MTAxNTQ0NDc= | 12,002 | ImportError: cannot import name 'MarianMTModel' | {
"login": "saivarshittha",
"id": 58642682,
"node_id": "MDQ6VXNlcjU4NjQyNjgy",
"avatar_url": "https://avatars.githubusercontent.com/u/58642682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saivarshittha",
"html_url": "https://github.com/saivarshittha",
"followers_url": "https://api.github.com/users/saivarshittha/followers",
"following_url": "https://api.github.com/users/saivarshittha/following{/other_user}",
"gists_url": "https://api.github.com/users/saivarshittha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saivarshittha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saivarshittha/subscriptions",
"organizations_url": "https://api.github.com/users/saivarshittha/orgs",
"repos_url": "https://api.github.com/users/saivarshittha/repos",
"events_url": "https://api.github.com/users/saivarshittha/events{/privacy}",
"received_events_url": "https://api.github.com/users/saivarshittha/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,626 | 1,626 | NONE | null | ## Environment info
- `transformers` version: 4.5.1
- Platform: Jupyter notebook
- Python version: 3.6.7
- PyTorch version (GPU?): 1.0.1.post2
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
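One plausible cause, given that transformers 4.5.1 needs a much newer PyTorch than the listed 1.0.1, is that the torch backend is not being picked up, so the model classes are never exported. A quick diagnostic sketch (my guess at the cause, not a confirmed fix):
```python
import transformers

print(transformers.__version__)
print(transformers.is_torch_available())  # False would explain the ImportError
```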
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12002/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12001 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12001/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12001/comments | https://api.github.com/repos/huggingface/transformers/issues/12001/events | https://github.com/huggingface/transformers/pull/12001 | 910,107,717 | MDExOlB1bGxSZXF1ZXN0NjYwNTc4MjMy | 12,001 | Update run_ner.py with id2label config | {
"login": "KoichiYasuoka",
"id": 15098598,
"node_id": "MDQ6VXNlcjE1MDk4NTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/15098598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KoichiYasuoka",
"html_url": "https://github.com/KoichiYasuoka",
"followers_url": "https://api.github.com/users/KoichiYasuoka/followers",
"following_url": "https://api.github.com/users/KoichiYasuoka/following{/other_user}",
"gists_url": "https://api.github.com/users/KoichiYasuoka/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KoichiYasuoka/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KoichiYasuoka/subscriptions",
"organizations_url": "https://api.github.com/users/KoichiYasuoka/orgs",
"repos_url": "https://api.github.com/users/KoichiYasuoka/repos",
"events_url": "https://api.github.com/users/KoichiYasuoka/events{/privacy}",
"received_events_url": "https://api.github.com/users/KoichiYasuoka/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sorry but I've rarely used `run_ner_no_trainer.py`. And, well, since `run_ner_no_trainer.py` does not change `config.json`, thus I want to keep this PR as is.",
"Will take care of it then, thanks!"
] | 1,622 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
Enhancement for `run_ner.py` to produce more meaningful `id2label`.
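For context, here is a minimal sketch of the idea; the label names below are hypothetical and not taken from the PR:
```python
from transformers import AutoConfig

label_list = ["O", "B-PER", "I-PER"]  # hypothetical NER labels
config = AutoConfig.from_pretrained("bert-base-cased", num_labels=len(label_list))

# store readable names in the config instead of the default LABEL_0, LABEL_1, ...
config.id2label = {i: label for i, label in enumerate(label_list)}
config.label2id = {label: i for i, label in enumerate(label_list)}
```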
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12001/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12001",
"html_url": "https://github.com/huggingface/transformers/pull/12001",
"diff_url": "https://github.com/huggingface/transformers/pull/12001.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12001.patch",
"merged_at": 1623238026000
} |
https://api.github.com/repos/huggingface/transformers/issues/12000 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12000/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12000/comments | https://api.github.com/repos/huggingface/transformers/issues/12000/events | https://github.com/huggingface/transformers/issues/12000 | 910,100,365 | MDU6SXNzdWU5MTAxMDAzNjU= | 12,000 | Exporting the operator repeat_interleave to ONNX opset version (<=12) is not supported! | {
"login": "JaheimLee",
"id": 18062264,
"node_id": "MDQ6VXNlcjE4MDYyMjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/18062264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JaheimLee",
"html_url": "https://github.com/JaheimLee",
"followers_url": "https://api.github.com/users/JaheimLee/followers",
"following_url": "https://api.github.com/users/JaheimLee/following{/other_user}",
"gists_url": "https://api.github.com/users/JaheimLee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JaheimLee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JaheimLee/subscriptions",
"organizations_url": "https://api.github.com/users/JaheimLee/orgs",
"repos_url": "https://api.github.com/users/JaheimLee/repos",
"events_url": "https://api.github.com/users/JaheimLee/events{/privacy}",
"received_events_url": "https://api.github.com/users/JaheimLee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have a solution, but maybe it's not the best way.\r\n\r\nsin_pos = torch.zeros_like(sin).repeat(1, 1, 1, 2)\r\nsin_pos[..., ::2] = sin\r\nsin_pos[..., 1::2] = sin\r\ncos_pos = torch.zeros_like(cos).repeat(1, 1, 1, 2)\r\ncos_pos[..., ::2] = cos\r\ncos_pos[..., 1::2] = cos",
"It seems that you haven't change roformer's positional embedding codes yet. Demo codes are like:\r\n```python\r\nimport torch\r\nsinusoidal_pos = torch.randn(1,12,16,32)\r\nsin, cos = sinusoidal_pos.chunk(2, dim=-1)\r\n\r\nsin_pos = torch.repeat_interleave(sin, 2, dim=-1)\r\ncos_pos = torch.repeat_interleave(cos, 2, dim=-1)\r\n\r\nsin_pos_newway = torch.stack([sin,sin],axis=-1).reshape_as(sinusoidal_pos)\r\ncos_pos_newway = torch.stack([cos,cos],axis=-1).reshape_as(sinusoidal_pos)\r\n\r\nassert sin_pos.equal(sin_pos_newway)\r\nassert cos_pos.equal(cos_pos_newway)\r\n```"
] | 1,622 | 1,624 | 1,624 | NONE | null | When I export the RoFormer model to ONNX Runtime, this error occurs. Are there any ops that can replace 'torch.repeat_interleave'?
It's in https://github.com/huggingface/transformers/blob/master/src/transformers/models/roformer/modeling_roformer.py, at lines 330 and 332. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12000/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11999 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11999/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11999/comments | https://api.github.com/repos/huggingface/transformers/issues/11999/events | https://github.com/huggingface/transformers/issues/11999 | 909,627,786 | MDU6SXNzdWU5MDk2Mjc3ODY= | 11,999 | Unable to find examples on using DPR for transfer learning, request you to provide examples | {
"login": "MaheshChandrra",
"id": 13826929,
"node_id": "MDQ6VXNlcjEzODI2OTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/13826929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaheshChandrra",
"html_url": "https://github.com/MaheshChandrra",
"followers_url": "https://api.github.com/users/MaheshChandrra/followers",
"following_url": "https://api.github.com/users/MaheshChandrra/following{/other_user}",
"gists_url": "https://api.github.com/users/MaheshChandrra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MaheshChandrra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MaheshChandrra/subscriptions",
"organizations_url": "https://api.github.com/users/MaheshChandrra/orgs",
"repos_url": "https://api.github.com/users/MaheshChandrra/repos",
"events_url": "https://api.github.com/users/MaheshChandrra/events{/privacy}",
"received_events_url": "https://api.github.com/users/MaheshChandrra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi ! There are a few examples in the `datasets` documentation. It shows how to add a FAISS index to a dataset to perform dense retrieval using DPR:\r\nhttps://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index",
"Thanks for the link,I tried using elasticsearch from my notebook and getting the below error,any help would be useful,Thanks.\r\n\r\n```\r\nfrom datasets import load_dataset\r\nsquad = load_dataset('squad', split='validation')\r\nsquad.add_elasticsearch_index(\"context\", host=\"localhost\", port=\"9200\")\r\n\r\n---------------------------------------------------------------------------\r\nConnectionRefusedError Traceback (most recent call last)\r\n~\\Anaconda3\\lib\\site-packages\\urllib3\\connection.py in _new_conn(self)\r\n 158 try:\r\n--> 159 conn = connection.create_connection(\r\n 160 (self._dns_host, self.port), self.timeout, **extra_kw\r\n\r\n~\\Anaconda3\\lib\\site-packages\\urllib3\\util\\connection.py in create_connection(address, timeout, source_address, socket_options)\r\n 83 if err is not None:\r\n---> 84 raise err\r\n 85 \r\n\r\n~\\Anaconda3\\lib\\site-packages\\urllib3\\util\\connection.py in create_connection(address, timeout, source_address, socket_options)\r\n 73 sock.bind(source_address)\r\n---> 74 sock.connect(sa)\r\n 75 return sock\r\n\r\nConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nNewConnectionError Traceback (most recent call last)\r\n~\\Anaconda3\\lib\\site-packages\\elasticsearch\\connection\\http_urllib3.py in perform_request(self, method, url, params, body, timeout, ignore, headers)\r\n 244 \r\n--> 245 response = self.pool.urlopen(\r\n 246 method, url, body, retries=Retry(False), headers=request_headers, **kw\r\n\r\n~\\Anaconda3\\lib\\site-packages\\urllib3\\connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)\r\n 723 \r\n--> 724 retries = retries.increment(\r\n 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]\r\n\r\n~\\Anaconda3\\lib\\site-packages\\urllib3\\util\\retry.py in increment(self, method, url, response, error, _pool, _stacktrace)\r\n 378 # Disabled, indicate to re-raise the error.\r\n--> 379 raise six.reraise(type(error), error, _stacktrace)\r\n 380 \r\n\r\n~\\Anaconda3\\lib\\site-packages\\urllib3\\packages\\six.py in reraise(tp, value, tb)\r\n 734 raise value.with_traceback(tb)\r\n--> 735 raise value\r\n 736 finally:\r\n\r\n~\\Anaconda3\\lib\\site-packages\\urllib3\\connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)\r\n 669 # Make the request on the httplib connection object.\r\n--> 670 httplib_response = self._make_request(\r\n 671 conn,\r\n\r\n~\\Anaconda3\\lib\\site-packages\\urllib3\\connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)\r\n 391 else:\r\n--> 392 conn.request(method, url, **httplib_request_kw)\r\n 393 \r\n\r\n~\\Anaconda3\\lib\\http\\client.py in request(self, method, url, body, headers, encode_chunked)\r\n 1239 \"\"\"Send a complete request to the server.\"\"\"\r\n-> 1240 self._send_request(method, url, body, headers, encode_chunked)\r\n 1241 \r\n\r\n~\\Anaconda3\\lib\\http\\client.py in _send_request(self, method, url, body, headers, encode_chunked)\r\n 1285 body = _encode(body, 'body')\r\n-> 1286 self.endheaders(body, encode_chunked=encode_chunked)\r\n 1287 \r\n\r\n~\\Anaconda3\\lib\\http\\client.py in endheaders(self, message_body, encode_chunked)\r\n 1234 raise 
CannotSendHeader()\r\n-> 1235 self._send_output(message_body, encode_chunked=encode_chunked)\r\n 1236 \r\n\r\n~\\Anaconda3\\lib\\http\\client.py in _send_output(self, message_body, encode_chunked)\r\n 1005 del self._buffer[:]\r\n-> 1006 self.send(msg)\r\n 1007 \r\n\r\n~\\Anaconda3\\lib\\http\\client.py in send(self, data)\r\n 945 if self.auto_open:\r\n--> 946 self.connect()\r\n 947 else:\r\n\r\n~\\Anaconda3\\lib\\site-packages\\urllib3\\connection.py in connect(self)\r\n 186 def connect(self):\r\n--> 187 conn = self._new_conn()\r\n 188 self._prepare_conn(conn)\r\n\r\n~\\Anaconda3\\lib\\site-packages\\urllib3\\connection.py in _new_conn(self)\r\n 170 except SocketError as e:\r\n--> 171 raise NewConnectionError(\r\n 172 self, \"Failed to establish a new connection: %s\" % e\r\n\r\nNewConnectionError: <urllib3.connection.HTTPConnection object at 0x000002AF86A1A670>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nConnectionError Traceback (most recent call last)\r\n<ipython-input-2-91302caec2e8> in <module>\r\n 1 from datasets import load_dataset\r\n 2 squad = load_dataset('squad', split='validation')\r\n----> 3 squad.add_elasticsearch_index(\"context\", host=\"10.41.128.179\", port=\"8082\")\r\n\r\n~\\Anaconda3\\lib\\site-packages\\datasets\\arrow_dataset.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)\r\n 3040 \"\"\"\r\n 3041 with self.formatted_as(type=None, columns=[column]):\r\n-> 3042 super().add_elasticsearch_index(\r\n 3043 column=column,\r\n 3044 index_name=index_name,\r\n\r\n~\\Anaconda3\\lib\\site-packages\\datasets\\search.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)\r\n 539 host=host, port=port, es_client=es_client, es_index_name=es_index_name, es_index_config=es_index_config\r\n 540 )\r\n--> 541 es_index.add_documents(self, column=column)\r\n 542 self._indexes[index_name] = es_index\r\n 543 \r\n\r\n~\\Anaconda3\\lib\\site-packages\\datasets\\search.py in add_documents(self, documents, column)\r\n 140 index_name = self.es_index_name\r\n 141 index_config = self.es_index_config\r\n--> 142 self.es_client.indices.create(index=index_name, body=index_config)\r\n 143 number_of_docs = len(documents)\r\n 144 not_verbose = bool(logger.getEffectiveLevel() > WARNING)\r\n\r\n~\\Anaconda3\\lib\\site-packages\\elasticsearch\\client\\utils.py in _wrapped(*args, **kwargs)\r\n 150 if p in kwargs:\r\n 151 params[p] = kwargs.pop(p)\r\n--> 152 return func(*args, params=params, headers=headers, **kwargs)\r\n 153 \r\n 154 return _wrapped\r\n\r\n~\\Anaconda3\\lib\\site-packages\\elasticsearch\\client\\indices.py in create(self, index, body, params, headers)\r\n 121 raise ValueError(\"Empty value passed for a required argument 'index'.\")\r\n 122 \r\n--> 123 return self.transport.perform_request(\r\n 124 \"PUT\", _make_path(index), params=params, headers=headers, body=body\r\n 125 )\r\n\r\n~\\Anaconda3\\lib\\site-packages\\elasticsearch\\transport.py in perform_request(self, method, url, headers, params, body)\r\n 388 # raise exception on last retry\r\n 389 if attempt == self.max_retries:\r\n--> 390 raise e\r\n 391 else:\r\n 392 raise e\r\n\r\n~\\Anaconda3\\lib\\site-packages\\elasticsearch\\transport.py in perform_request(self, method, url, headers, params, body)\r\n 356 \r\n 357 try:\r\n--> 358 status, 
headers_response, data = connection.perform_request(\r\n 359 method,\r\n 360 url,\r\n\r\n~\\Anaconda3\\lib\\site-packages\\elasticsearch\\connection\\http_urllib3.py in perform_request(self, method, url, params, body, timeout, ignore, headers)\r\n 256 if isinstance(e, ReadTimeoutError):\r\n 257 raise ConnectionTimeout(\"TIMEOUT\", str(e), e)\r\n--> 258 raise ConnectionError(\"N/A\", str(e), e)\r\n 259 \r\n 260 # raise warnings if any from the 'Warnings' header.\r\n\r\nConnectionError: ConnectionError(<urllib3.connection.HTTPConnection object at 0x000002AF86A1A670>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it) caused by: NewConnectionError(<urllib3.connection.HTTPConnection object at 0x000002AF86A1A670>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it)\r\n```\r\n",
"Hi ! Did you start elasticsearch on your machine ? Could you also check that you used the right port ?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,626 | 1,626 | NONE | null | Hi Team Hugging face
Can you please provide a Q&A example for retrieving an answer for a given text using DPR? I've read the documentation but couldn't find one. It would be of great help.
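In the meantime, here is a minimal end-to-end retrieval sketch using the public DPR checkpoints; the passages are made up for illustration and this is not an official example:
```python
from transformers import (
    DPRContextEncoder,
    DPRContextEncoderTokenizer,
    DPRQuestionEncoder,
    DPRQuestionEncoderTokenizer,
)

ctx_tok = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_enc = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
q_tok = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_enc = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")

passages = ["Paris is the capital of France.", "The Nile is a river in Africa."]
ctx_emb = ctx_enc(**ctx_tok(passages, return_tensors="pt", padding=True)).pooler_output
q_emb = q_enc(**q_tok("What is the capital of France?", return_tensors="pt")).pooler_output

scores = q_emb @ ctx_emb.T               # dot-product relevance scores
print(passages[scores.argmax().item()])  # -> the Paris passage
```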
Thanks
Mahesh Mareedu
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11999/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11998 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11998/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11998/comments | https://api.github.com/repos/huggingface/transformers/issues/11998/events | https://github.com/huggingface/transformers/issues/11998 | 910,040,357 | MDU6SXNzdWU5MTAwNDAzNTc= | 11,998 | Add SENet Blocks in Encoding Layers | {
"login": "calusbr",
"id": 25322394,
"node_id": "MDQ6VXNlcjI1MzIyMzk0",
"avatar_url": "https://avatars.githubusercontent.com/u/25322394?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/calusbr",
"html_url": "https://github.com/calusbr",
"followers_url": "https://api.github.com/users/calusbr/followers",
"following_url": "https://api.github.com/users/calusbr/following{/other_user}",
"gists_url": "https://api.github.com/users/calusbr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/calusbr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/calusbr/subscriptions",
"organizations_url": "https://api.github.com/users/calusbr/orgs",
"repos_url": "https://api.github.com/users/calusbr/repos",
"events_url": "https://api.github.com/users/calusbr/events{/privacy}",
"received_events_url": "https://api.github.com/users/calusbr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"Hey, I'd like to work on implementing this feature if it hasn't been done yet."
] | 1,622 | 1,630 | null | NONE | null | # 🚀 Feature Request
I read the article "[SesameBERT: Attention for Anywhere](https://arxiv.org/pdf/1910.03176.pdf)" and would like to add SENet blocks to the Hugging Face implementation. The article's authors provide an implementation in [Tensorflow](https://github.com/ICLR2020Sesame/SesameBert/blob/master/modeling.py), but I would like to use the library in PyTorch.
## Motivation
The use of SENet blocks ([Squeeze-and-Excitation Networks](https://arxiv.org/abs/1709.01507)) has obtained state-of-the-art results, and they seem promising in NLP.
## Your contribution
I know that it is possible to modify the [`BertLayer()`](https://github.com/huggingface/transformers/blob/61c506349134db0a0a2fd6fb2eff8e29a2f84e79/src/transformers/models/bert/modeling_bert.py#L430) and [`BertEncoder()`](https://github.com/huggingface/transformers/blob/61c506349134db0a0a2fd6fb2eff8e29a2f84e79/src/transformers/models/bert/modeling_bert.py#L513) classes.
Any suggestions on how to modify the code so that the idea used in the article can be applied? A rough sketch follows the figure below.

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11998/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11997 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11997/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11997/comments | https://api.github.com/repos/huggingface/transformers/issues/11997/events | https://github.com/huggingface/transformers/pull/11997 | 909,760,681 | MDExOlB1bGxSZXF1ZXN0NjYwMjg2MTI0 | 11,997 | [deepspeed] add nvme test skip rule | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | As discussed at https://github.com/microsoft/DeepSpeed/issues/1126, this makes it possible to skip the NVMe test if the user's system isn't compatible with the libaio requirements.
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11997/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11997",
"html_url": "https://github.com/huggingface/transformers/pull/11997",
"diff_url": "https://github.com/huggingface/transformers/pull/11997.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11997.patch",
"merged_at": 1622660798000
} |
https://api.github.com/repos/huggingface/transformers/issues/11996 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11996/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11996/comments | https://api.github.com/repos/huggingface/transformers/issues/11996/events | https://github.com/huggingface/transformers/issues/11996 | 909,627,786 | MDU6SXNzdWU5MDk2Mjc3ODY= | 11,996 | None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used. | {
"login": "MK096",
"id": 20142735,
"node_id": "MDQ6VXNlcjIwMTQyNzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/20142735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MK096",
"html_url": "https://github.com/MK096",
"followers_url": "https://api.github.com/users/MK096/followers",
"following_url": "https://api.github.com/users/MK096/following{/other_user}",
"gists_url": "https://api.github.com/users/MK096/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MK096/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MK096/subscriptions",
"organizations_url": "https://api.github.com/users/MK096/orgs",
"repos_url": "https://api.github.com/users/MK096/repos",
"events_url": "https://api.github.com/users/MK096/events{/privacy}",
"received_events_url": "https://api.github.com/users/MK096/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,626 | 1,626 | NONE | null | While generating an exe using PyInstaller, I get the following error; in the Python IDLE it works fine.
```
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
```
My TF and PyTorch versions are 2.5.0 and 1.8.1+cpu, respectively.
I tried uninstalling them and then reinstalling them along with transformers.
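One thing that might be worth trying, a guess based on how PyInstaller handles dynamic imports rather than a confirmed fix, is declaring the frameworks explicitly when building, e.g. `pyinstaller --hidden-import=torch --hidden-import=tensorflow your_script.py` (where `your_script.py` is a placeholder for the actual entry point).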
Any help will be appreciated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11996/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11995 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11995/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11995/comments | https://api.github.com/repos/huggingface/transformers/issues/11995/events | https://github.com/huggingface/transformers/issues/11995 | 909,627,039 | MDU6SXNzdWU5MDk2MjcwMzk= | 11,995 | tensorflow has no attribute swish | {
"login": "FrancoisMentec",
"id": 22057576,
"node_id": "MDQ6VXNlcjIyMDU3NTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/22057576?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FrancoisMentec",
"html_url": "https://github.com/FrancoisMentec",
"followers_url": "https://api.github.com/users/FrancoisMentec/followers",
"following_url": "https://api.github.com/users/FrancoisMentec/following{/other_user}",
"gists_url": "https://api.github.com/users/FrancoisMentec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FrancoisMentec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FrancoisMentec/subscriptions",
"organizations_url": "https://api.github.com/users/FrancoisMentec/orgs",
"repos_url": "https://api.github.com/users/FrancoisMentec/repos",
"events_url": "https://api.github.com/users/FrancoisMentec/events{/privacy}",
"received_events_url": "https://api.github.com/users/FrancoisMentec/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is very strange - I can't reproduce it here, and Tensorflow >= 2.3 should have Swish as an activation. I'll try to figure that one out, but if you discover anything else about the problem, let me know!",
"Tensorflow version is wrong:\r\n```\r\nimport tensorflow as ts\r\n\r\nprint(ts.__version__)\r\n```\r\ngive:\r\n`2.1.0`\r\n\r\nI have no idea why pip and transformers-cli show a different version. I must state that I hate Microsoft Azure. The error isn't on your side.\r\n\r\nEDIT:\r\nFor anyone who get the same error with Azure, you need to select Python 3.8 in the kernel version on your notebook, Python 3 is selected by default.\r\n\r\n",
"Don't worry about it! This is something I see with conda or venvs sometimes - usually it's because I'm accidentally using two Python environments at once - for example, if I don't have pip installed in a conda environment the pip command still works, but it's actually the system pip, and if I'm not paying attention I just end up installing packages systemwide, while still being unable to access them in my active environment."
] | 1,622 | 1,622 | 1,622 | NONE | null | ## Environment info
- transformers version: 4.6.1
- Platform: Linux-5.4.0-1047-azure-x86_64-with-glibc2.10
- Python version: 3.8.1
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
```
azureuser@ai4hr-k80:~/cloudfiles/code/Users/francois.mentec$ pip show tensorflow
Name: tensorflow
Version: 2.5.0
Summary: TensorFlow is an open source machine learning framework for everyone.
Home-page: https://www.tensorflow.org/
Author: Google Inc.
Author-email: [email protected]
License: Apache 2.0
Location: /anaconda/envs/azureml_py38/lib/python3.8/site-packages
Requires: typing-extensions, h5py, grpcio, tensorflow-estimator, wrapt, gast, tensorboard, six, keras-nightly, astunparse, flatbuffers, wheel, absl-py, protobuf, numpy, opt-einsum, keras-preprocessing, termcolor, google-pasta
Required-by:
azureuser@ai4hr-k80:~/cloudfiles/code/Users/francois.mentec$ pip show transformers
Name: transformers
Version: 4.6.1
Summary: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch
Home-page: https://github.com/huggingface/transformers
Author: Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Sam Shleifer, Patrick von Platen, Sylvain Gugger, Suraj Patil, Stas Bekman, Google AI Language Team Authors, Open AI team Authors, Facebook AI Authors, Carnegie Mellon University Authors
Author-email: [email protected]
License: Apache
Location: /anaconda/envs/azureml_py38/lib/python3.8/site-packages
Requires: numpy, filelock, requests, tqdm, sacremoses, packaging, tokenizers, huggingface-hub, regex
Required-by:
```
### Who can help
- tensorflow: @Rocketknight1
## To reproduce
Steps to reproduce the behavior:
import transformers:
`from transformers import BertTokenizer, BertModel, AdamW`
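A quick sanity check, since `tensorflow_core` in the error below belongs to much older TF releases, is to confirm which interpreter and TF version the notebook kernel actually sees:
```python
import sys
import tensorflow as tf

print(sys.executable)  # the interpreter the notebook kernel runs
print(tf.__version__)  # should match what `pip show tensorflow` reports
```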
## Expected behavior
Following error:
`AttributeError: module 'tensorflow_core.python.keras.api._v2.keras.activations' has no attribute 'swish'` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11995/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11994 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11994/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11994/comments | https://api.github.com/repos/huggingface/transformers/issues/11994/events | https://github.com/huggingface/transformers/pull/11994 | 909,599,614 | MDExOlB1bGxSZXF1ZXN0NjYwMTUwNDUz | 11,994 | CLIPFeatureExtractor should resize images with kept aspect ratio | {
"login": "TobiasNorlund",
"id": 2678217,
"node_id": "MDQ6VXNlcjI2NzgyMTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2678217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TobiasNorlund",
"html_url": "https://github.com/TobiasNorlund",
"followers_url": "https://api.github.com/users/TobiasNorlund/followers",
"following_url": "https://api.github.com/users/TobiasNorlund/following{/other_user}",
"gists_url": "https://api.github.com/users/TobiasNorlund/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TobiasNorlund/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TobiasNorlund/subscriptions",
"organizations_url": "https://api.github.com/users/TobiasNorlund/orgs",
"repos_url": "https://api.github.com/users/TobiasNorlund/repos",
"events_url": "https://api.github.com/users/TobiasNorlund/events{/privacy}",
"received_events_url": "https://api.github.com/users/TobiasNorlund/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @TobiasNorlund , thanks a lot for spotting this.\r\n\r\nHowever, this change does not actually work as expected. I processed a few images using the origin CLIP transforms and `CLIPFeatureExtractor` and compared the output. Here's the script\r\n\r\n```python3\r\nfrom PIL import Image\r\nimport os\r\nimport skimage\r\n\r\nimport torch\r\nfrom clip import load\r\n\r\nfrom transformers import CLIPConfig, CLIPModel, CLIPTokenizer, CLIPFeatureExtractor, CLIPProcessor\r\n\r\n_, transforms = load(\"./model.pt\", jit=False)\r\n\r\nproc = CLIPProcessor.from_pretrained(\"./clip-vit-base-patch32/\")\r\n\r\nfiles = [filename for filename in os.listdir(skimage.data_dir) if filename.endswith(\".png\") or filename.endswith(\".jpg\")]\r\n\r\nimages = []\r\nfor filename in files:\r\n image = transforms(Image.open(os.path.join(skimage.data_dir, filename)).convert(\"RGB\"))\r\n images.append(image)\r\n\r\nhf_images = []\r\nfor filename in files:\r\n image = Image.open(os.path.join(skimage.data_dir, filename)).convert(\"RGB\")\r\n enc = proc(images=image, return_tensors=\"pt\")\r\n hf_images.append(enc.pixel_values.squeeze(0))\r\n\r\nmatch = [torch.allclose(hf_image, pt_image, atol=4e-2) for hf_image, pt_image in zip(hf_images, images)]\r\nall(match)\r\n```\r\n\r\nI looked into `torchvision' s` `resize` and `center_crop` implementation and turns out they are a bit different than the way we have implemented it. Following their implem I tried overriding the `resize` and `center_crop` methods in `CLIPFeatureExtractor` which seems to be working. Here's the code \r\n \r\n```python\r\ndef center_crop(self, image, size):\r\n \"\"\"\r\n Crops :obj:`image` to the given size using a center crop. Note that if the image is too small to be cropped to\r\n the size is given, it will be padded (so the returned result has the size asked).\r\n\r\n Args:\r\n image (:obj:`PIL.Image.Image` or :obj:`np.ndarray` or :obj:`torch.Tensor`):\r\n The image to resize.\r\n size (:obj:`int` or :obj:`Tuple[int, int]`):\r\n The size to which crop the image.\r\n \"\"\"\r\n self._ensure_format_supported(image)\r\n if not isinstance(size, tuple):\r\n size = (size, size)\r\n\r\n if not isinstance(image, Image.Image):\r\n image = self.to_pil_image(image)\r\n\r\n image_width, image_height = image.size\r\n crop_height, crop_width = size\r\n\r\n crop_top = int((image_height - crop_height + 1) * 0.5)\r\n crop_left = int((image_width - crop_width + 1) * 0.5)\r\n\r\n return image.crop((crop_left, crop_top, crop_left + crop_width, crop_top + crop_height))\r\n\r\ndef resize(self, image, size, resample=Image.BICUBIC):\r\n width, height = image.size\r\n\r\n short, long = (width, height) if width <= height else (height, width)\r\n if short == size:\r\n return image\r\n\r\n new_short, new_long = size, int(size * long / short)\r\n\r\n new_w, new_h = (new_short, new_long) if width <= height else (new_long, new_short)\r\n return image.resize((new_w, new_h), resample)\r\n```\r\nWith this change, there is no need to change the `__call__` method.\r\nHowever, this change requires `center_crop` to be always applied as `resize` won't always resize to an exact given size. \r\n\r\n\r\nCould you verify this on your end and update the PR? Thanks!",
"Thanks @patil-suraj !\r\nPlease have a look at the updated PR, in which I modified your `resize` method to also support non-PIL images to make tests pass."
] | 1,622 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
Fixes #11992
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patil-suraj
@LysandreJik
@patrickvonplaten
@sgugger
## Description
With this PR, the preprocessing should match the original preprocessing exactly. With the example from #11992 we now get the exact same results:
```python
>>> import torch
>>> import requests
>>> from transformers import CLIPProcessor, CLIPModel
>>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(images=image, return_tensors="pt", padding=True)
>>> inputs
{'pixel_values': tensor([[[[ 0.5873, 0.5873, 0.6165, ..., 0.0617, 0.0471, -0.0259],
[ 0.5727, 0.5727, 0.6603, ..., 0.1201, 0.0763, 0.0909],
[ 0.5873, 0.5435, 0.6165, ..., 0.0325, 0.1201, 0.0617],
...,
[ 1.8719, 1.8573, 1.8719, ..., 1.3902, 1.4340, 1.4194],
[ 1.8281, 1.8719, 1.8427, ..., 1.4486, 1.4340, 1.5070],
[ 1.8573, 1.9011, 1.8281, ..., 1.3756, 1.3610, 1.4486]],
[[-1.3169, -1.3019, -1.3169, ..., -1.4970, -1.4369, -1.4820],
[-1.2418, -1.2718, -1.2268, ..., -1.4369, -1.4669, -1.4519],
[-1.2568, -1.3169, -1.2268, ..., -1.4669, -1.4069, -1.4519],
...,
[ 0.1239, 0.1089, 0.1239, ..., -0.7016, -0.6865, -0.6865],
[ 0.0789, 0.0939, 0.0488, ..., -0.6565, -0.6865, -0.6115],
[ 0.0939, 0.1089, 0.0038, ..., -0.7766, -0.7316, -0.6115]],
[[-0.4848, -0.4137, -0.3853, ..., -0.9541, -0.8545, -0.8545],
[-0.4137, -0.4706, -0.3711, ..., -0.8119, -0.8545, -0.7834],
[-0.3284, -0.4422, -0.3853, ..., -0.8688, -0.8119, -0.8830],
...,
[ 1.5771, 1.6482, 1.6340, ..., 0.9088, 0.9514, 0.8945],
[ 1.6198, 1.6055, 1.6055, ..., 0.8661, 0.8092, 0.7950],
[ 1.6624, 1.6766, 1.5487, ..., 0.7950, 0.8661, 0.8519]]]])}
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11994/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11994",
"html_url": "https://github.com/huggingface/transformers/pull/11994",
"diff_url": "https://github.com/huggingface/transformers/pull/11994.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11994.patch",
"merged_at": 1623330641000
} |
https://api.github.com/repos/huggingface/transformers/issues/11993 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11993/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11993/comments | https://api.github.com/repos/huggingface/transformers/issues/11993/events | https://github.com/huggingface/transformers/issues/11993 | 909,591,812 | MDU6SXNzdWU5MDk1OTE4MTI= | 11,993 | XLNET on SQuAD2 evaluation error | {
"login": "yifding",
"id": 24882423,
"node_id": "MDQ6VXNlcjI0ODgyNDIz",
"avatar_url": "https://avatars.githubusercontent.com/u/24882423?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yifding",
"html_url": "https://github.com/yifding",
"followers_url": "https://api.github.com/users/yifding/followers",
"following_url": "https://api.github.com/users/yifding/following{/other_user}",
"gists_url": "https://api.github.com/users/yifding/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yifding/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yifding/subscriptions",
"organizations_url": "https://api.github.com/users/yifding/orgs",
"repos_url": "https://api.github.com/users/yifding/repos",
"events_url": "https://api.github.com/users/yifding/events{/privacy}",
"received_events_url": "https://api.github.com/users/yifding/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It looks like you are using DataParallel for the evaluation, which does not always work if the number of samples is not a round multiple of the number of GPUs. You should use DistributedDataParallel (as recommended by the PyTorch team), just change your command to:\r\n```\r\npython -m torch.distributed.launch --nproc_per_node 2 run_qa_beam_search.py \\\r\n``` \r\n(and replace 2 by your actual number of GPUs) and it should work (it did on my setup).",
"> It looks like you are using DataParallel for the evaluation, which does not always work if the number of samples is not a round multiple of the number of GPUs. You should use DistributedDataParallel (as recommended by the PyTorch team), just change your command to:\r\n> \r\n> ```\r\n> python -m torch.distributed.launch --nproc_per_node 2 run_qa_beam_search.py \\\r\n> ```\r\n> \r\n> (and replace 2 by your actual number of GPUs) and it should work (it did on my setup).\r\n\r\nThank you so much, close the issue now."
] | 1,622 | 1,623 | 1,623 | NONE | null | ## Environment info
- `transformers` version: 4.7.0.dev0
- Platform: NAME="Red Hat Enterprise Linux Server" VERSION="7.9 (Maipo)"
- Python version: Python 3.6.8 :: Anaconda, Inc.
- PyTorch version (GPU?): 1.8.1
- Tensorflow version (GPU?): not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: parallel on 4 GPUs on one node
### Who can help
@patrickvonplaten @sgugger
- Models: XLNET, benchmarks on SQuAD2
## Information
Model I am using (Bert, XLNet ...): XLNET cased-base; it produces an error like the following in the evaluation step:
```
100%|█████████▉| 1528/1529 [09:59<00:00, 2.55it/s]Traceback (most recent call last):
File "/scratch365/yding4/Amazon_2021_summer_intern/AVEQA_PyTorch/XLNET_on_SQuAD2.0/run_qa_beam_search.py", line 661, in <module>
main()
File "/scratch365/yding4/Amazon_2021_summer_intern/AVEQA_PyTorch/XLNET_on_SQuAD2.0/run_qa_beam_search.py", line 620, in main
metrics = trainer.evaluate()
File "/scratch365/yding4/Amazon_2021_summer_intern/AVEQA_PyTorch/XLNET_on_SQuAD2.0/trainer_qa.py", line 50, in evaluate
ignore_keys=ignore_keys,
File "/scratch365/yding4/Amazon_2021_summer_intern/transformers/src/transformers/trainer.py", line 2169, in evaluation_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/scratch365/yding4/Amazon_2021_summer_intern/transformers/src/transformers/trainer.py", line 2383, in prediction_step
outputs = model(**inputs)
File "/afs/crc.nd.edu/user/y/yding4/.conda/envs/AVEQA_PyTorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/afs/crc.nd.edu/user/y/yding4/.conda/envs/AVEQA_PyTorch/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward
return self.gather(outputs, self.output_device)
File "/afs/crc.nd.edu/user/y/yding4/.conda/envs/AVEQA_PyTorch/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 180, in gather
return gather(outputs, output_device, dim=self.dim)
File "/afs/crc.nd.edu/user/y/yding4/.conda/envs/AVEQA_PyTorch/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 76, in gather
res = gather_map(outputs)
File "/afs/crc.nd.edu/user/y/yding4/.conda/envs/AVEQA_PyTorch/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 70, in gather_map
for k in out))
File "<string>", line 11, in __init__
File "/scratch365/yding4/Amazon_2021_summer_intern/transformers/src/transformers/file_utils.py", line 1739, in __post_init__
for element in iterator:
File "/afs/crc.nd.edu/user/y/yding4/.conda/envs/AVEQA_PyTorch/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 70, in <genexpr>
for k in out))
File "/afs/crc.nd.edu/user/y/yding4/.conda/envs/AVEQA_PyTorch/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 71, in gather_map
return type(out)(map(gather_map, zip(*outputs)))
File "/afs/crc.nd.edu/user/y/yding4/.conda/envs/AVEQA_PyTorch/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map
return Gather.apply(target_device, dim, *outputs)
File "/afs/crc.nd.edu/user/y/yding4/.conda/envs/AVEQA_PyTorch/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 72, in forward
return comm.gather(inputs, ctx.dim, ctx.target_device)
File "/afs/crc.nd.edu/user/y/yding4/.conda/envs/AVEQA_PyTorch/lib/python3.6/site-packages/torch/nn/parallel/comm.py", line 235, in gather
return torch._C._gather(tensors, dim, destination)
RuntimeError: Input tensor at index 3 has invalid shape [384, 1, 1024], but expected [384, 2, 1024]
```
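A hedged reading of the shape mismatch (illustration only; the `(seq_len, batch, hidden)` layout is an assumption, not verified here): `nn.DataParallel` gathers replica outputs by concatenating along dim 0 and requires every other dim to match, so a smaller last evaluation batch differs along dim 1 and trips exactly this kind of check. A minimal CPU-only sketch of the same failure:
```python
import torch

# two replicas' outputs with (seq_len, batch, hidden)-shaped tensors; the
# replica that received only the 1 leftover sample differs along dim 1
full = torch.zeros(384, 2, 1024)
tail = torch.zeros(384, 1, 1024)

# gathering concatenates along dim 0, so all other dims must match:
torch.cat([full, tail], dim=0)  # RuntimeError: sizes must match except in dim 0
```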
The problem arises when using:
- [x] the official example scripts: (give details below)
```
python run_qa_beam_search.py \
--model_name_or_path xlnet-large-cased \
--dataset_name squad_v2 \
--do_train \
--do_eval \
--version_2_with_negative \
--learning_rate 3e-5 \
--num_train_epochs 1 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./wwm_cased_finetuned_squad/ \
--per_device_eval_batch_size=2 \
--per_device_train_batch_size=2 \
--save_steps 5000
```
The evaluation works when **--per_device_eval_batch_size** is set to 1 instead of 2.
The tasks I am working on is:
- [x] an official GLUE/SQUaD task: SQUaD2.0 task
the script is used from https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa_beam_search.py
## To reproduce
Steps to reproduce the behavior:
1. just run the script
## Expected behavior
Training completes successfully and evaluation results are produced. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11993/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11992 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11992/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11992/comments | https://api.github.com/repos/huggingface/transformers/issues/11992/events | https://github.com/huggingface/transformers/issues/11992 | 909,572,948 | MDU6SXNzdWU5MDk1NzI5NDg= | 11,992 | CLIPFeatureExtractor should resize images with kept aspect ratio | {
"login": "TobiasNorlund",
"id": 2678217,
"node_id": "MDQ6VXNlcjI2NzgyMTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2678217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TobiasNorlund",
"html_url": "https://github.com/TobiasNorlund",
"followers_url": "https://api.github.com/users/TobiasNorlund/followers",
"following_url": "https://api.github.com/users/TobiasNorlund/following{/other_user}",
"gists_url": "https://api.github.com/users/TobiasNorlund/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TobiasNorlund/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TobiasNorlund/subscriptions",
"organizations_url": "https://api.github.com/users/TobiasNorlund/orgs",
"repos_url": "https://api.github.com/users/TobiasNorlund/repos",
"events_url": "https://api.github.com/users/TobiasNorlund/events{/privacy}",
"received_events_url": "https://api.github.com/users/TobiasNorlund/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"Specifically, I believe this is due to differing logic in the image resizing preprocessing step. The original CLIP implementation uses `torchvision.transforms.Resize` ([ref](https://github.com/openai/CLIP/blob/main/clip/clip.py#L60)) to resize the image given a single integer of the desired size. According to the [documentation of `Resize`](https://pytorch.org/vision/stable/transforms.html#torchvision.transforms.Resize):\r\n\r\n> If size is an int, smaller edge of the image will be matched to this number. i.e, if height > width, then image will be rescaled to (size * height / width, size)\r\n\r\nHowever, in transformers, the image is eventually [resized in `CLIPFeatureExtractor`](https://github.com/huggingface/transformers/blob/master/src/transformers/models/clip/feature_extraction_clip.py#L146), which in turn [resizes to a square size](https://github.com/huggingface/transformers/blob/123b597f5da6dd1e54545f9cce1450dc4b401784/src/transformers/image_utils.py#L158)."
] | 1,622 | 1,623 | 1,623 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-5.11.0-7614-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.8.1+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patil-suraj
@LysandreJik
@patrickvonplaten
@sgugger
## Information
Model I am using (Bert, XLNet ...): CLIP
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
The CLIPFeatureExtractor does not replicate the behavior of the [CLIP reference implementation](https://github.com/openai/CLIP). The below code is taken from the official [huggingface transformers CLIP documentation](https://huggingface.co/transformers/model_doc/clip.html):
```
$ docker run --rm -it huggingface/transformers-cpu:4.6.1
root@02cd404c4a60:/workspace# pip install Pillow==7.2.0
root@02cd404c4a60:/workspace# python3
Python 3.6.9 (default, Jan 26 2021, 15:33:00)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> from PIL import Image
>>> import requests
>>> from transformers import CLIPProcessor, CLIPModel
>>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(images=image, return_tensors="pt", padding=True)
>>> inputs
{'pixel_values': tensor([[[[ 0.2807, 0.3829, 0.4267, ..., -0.2886, -0.2740, -0.2886],
[ 0.3245, 0.3829, 0.4121, ..., -0.2886, -0.2886, -0.3178],
[ 0.2807, 0.3537, 0.3683, ..., -0.3762, -0.3470, -0.3178],
...,
[ 1.6384, 1.5362, 1.4194, ..., 1.3902, 1.2880, 1.2442],
[ 1.6092, 1.5508, 1.5070, ..., 1.2150, 0.9814, 0.8501],
[ 1.6092, 1.4778, 1.4924, ..., 0.1201, -0.1280, -0.3908]],
[[-1.3919, -1.3919, -1.3919, ..., -1.5420, -1.5420, -1.5570],
[-1.3469, -1.3469, -1.3469, ..., -1.5270, -1.5120, -1.5270],
[-1.4069, -1.3769, -1.3469, ..., -1.5570, -1.5420, -1.5420],
...,
[-0.3414, -0.4614, -0.5515, ..., -0.6415, -0.7016, -0.7466],
[-0.3414, -0.3864, -0.4914, ..., -0.7316, -0.8666, -0.9267],
[-0.3714, -0.4914, -0.5065, ..., -1.2869, -1.3769, -1.4820]],
[[-0.6555, -0.4990, -0.5417, ..., -1.0110, -0.9256, -0.9541],
[-0.6981, -0.5986, -0.5701, ..., -1.0110, -0.9541, -1.0110],
[-0.6128, -0.5275, -0.4990, ..., -1.0252, -1.0394, -1.0536],
...,
[ 1.3638, 1.3496, 1.1221, ..., 1.1647, 1.0652, 0.9514],
[ 1.3354, 1.1789, 1.2643, ..., 0.9372, 0.7523, 0.6244],
[ 1.3780, 1.3780, 1.2643, ..., -0.1293, -0.5559, -0.7408]]]])}
```
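For reference, a minimal sketch of the two resize strategies that diverge here (my illustration; assumes only Pillow and torchvision, and the image size is made up):
```python
from PIL import Image
from torchvision import transforms

img = Image.new("RGB", (640, 480))  # PIL size is (width, height)

# original CLIP: an int size matches the smaller edge, keeping the aspect ratio
print(transforms.Resize(224)(img).size)         # (298, 224)

# a square resize, as CLIPFeatureExtractor effectively does: aspect ratio is lost
print(transforms.Resize((224, 224))(img).size)  # (224, 224)
```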
## Expected behavior
Feeding the same image through the official CLIP preprocessing function gives different results:
```
$ docker run --rm -it pytorch/pytorch:1.8.1-cuda11.1-cudnn8-runtime
root@fe597c8c5e9f:/workspace# apt update && apt install git
root@fe597c8c5e9f:/workspace# pip install ftfy regex tqdm
root@fe597c8c5e9f:/workspace# pip install git+https://github.com/openai/CLIP.git
root@fe597c8c5e9f:/workspace# python
Python 3.8.8 (default, Feb 24 2021, 21:46:12)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> import clip
>>> import requests
>>> from PIL import Image
>>> device = "cpu"
>>> model, preprocess = clip.load("ViT-B/32", device=device)
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> inputs = preprocess(Image.open(requests.get(url, stream=True).raw)).unsqueeze(0).to(device)
>>> inputs
tensor([[[[ 0.5873, 0.5873, 0.6165, ..., 0.0617, 0.0471, -0.0259],
[ 0.5727, 0.5727, 0.6603, ..., 0.1201, 0.0763, 0.0909],
[ 0.5873, 0.5435, 0.6165, ..., 0.0325, 0.1201, 0.0617],
...,
[ 1.8719, 1.8573, 1.8719, ..., 1.3902, 1.4340, 1.4194],
[ 1.8281, 1.8719, 1.8427, ..., 1.4486, 1.4340, 1.5070],
[ 1.8573, 1.9011, 1.8281, ..., 1.3756, 1.3610, 1.4486]],
[[-1.3169, -1.3019, -1.3169, ..., -1.4970, -1.4369, -1.4820],
[-1.2418, -1.2718, -1.2268, ..., -1.4369, -1.4669, -1.4519],
[-1.2568, -1.3169, -1.2268, ..., -1.4669, -1.4069, -1.4519],
...,
[ 0.1239, 0.1089, 0.1239, ..., -0.7016, -0.6865, -0.6865],
[ 0.0789, 0.0939, 0.0488, ..., -0.6565, -0.6865, -0.6115],
[ 0.0939, 0.1089, 0.0038, ..., -0.7766, -0.7316, -0.6115]],
[[-0.4848, -0.4137, -0.3853, ..., -0.9541, -0.8545, -0.8545],
[-0.4137, -0.4706, -0.3711, ..., -0.8119, -0.8545, -0.7834],
[-0.3284, -0.4422, -0.3853, ..., -0.8688, -0.8119, -0.8830],
...,
[ 1.5771, 1.6482, 1.6340, ..., 0.9088, 0.9514, 0.8945],
[ 1.6198, 1.6055, 1.6055, ..., 0.8661, 0.8092, 0.7950],
[ 1.6624, 1.6766, 1.5487, ..., 0.7950, 0.8661, 0.8519]]]])
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11992/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11991 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11991/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11991/comments | https://api.github.com/repos/huggingface/transformers/issues/11991/events | https://github.com/huggingface/transformers/issues/11991 | 909,553,081 | MDU6SXNzdWU5MDk1NTMwODE= | 11,991 | Trainer API | {
"login": "kruthikakr",
"id": 12526620,
"node_id": "MDQ6VXNlcjEyNTI2NjIw",
"avatar_url": "https://avatars.githubusercontent.com/u/12526620?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kruthikakr",
"html_url": "https://github.com/kruthikakr",
"followers_url": "https://api.github.com/users/kruthikakr/followers",
"following_url": "https://api.github.com/users/kruthikakr/following{/other_user}",
"gists_url": "https://api.github.com/users/kruthikakr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kruthikakr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kruthikakr/subscriptions",
"organizations_url": "https://api.github.com/users/kruthikakr/orgs",
"repos_url": "https://api.github.com/users/kruthikakr/repos",
"events_url": "https://api.github.com/users/kruthikakr/events{/privacy}",
"received_events_url": "https://api.github.com/users/kruthikakr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe @borisdayma @sgugger have an idea",
"What is your environment?\r\nCan you try to call `wandb login` in the console before running your script? You typically only have to do it once.\r\nOtherwise you can always use you key and login within your script or use environment variables.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,626 | 1,626 | NONE | null | I am using the Trainer API to pretrain a BERT model.
Start training...
```
Traceback (most recent call last):
  File "/home/kruthika/PycharmProjects/huggingfaceBert/pretrain_transformers_pytorch.py", line 451, in <module>
    trainer.train(model_path=model_path)
  File "/home/kruthika/PycharmProjects/huggingfaceBert/venv/lib/python3.6/site-packages/transformers/trainer.py", line 1208, in train
    self.control = self.callback_handler.on_train_begin(args, self.state, self.control)
  File "/home/kruthika/PycharmProjects/huggingfaceBert/venv/lib/python3.6/site-packages/transformers/trainer_callback.py", line 340, in on_train_begin
    return self.call_event("on_train_begin", args, state, control)
  File "/home/kruthika/PycharmProjects/huggingfaceBert/venv/lib/python3.6/site-packages/transformers/trainer_callback.py", line 388, in call_event
    **kwargs,
  File "/home/kruthika/PycharmProjects/huggingfaceBert/venv/lib/python3.6/site-packages/transformers/integrations.py", line 717, in on_train_begin
    self.setup(args, state, model, **kwargs)
  File "/home/kruthika/PycharmProjects/huggingfaceBert/venv/lib/python3.6/site-packages/transformers/integrations.py", line 694, in setup
    **init_args,
  File "/home/kruthika/PycharmProjects/huggingfaceBert/venv/lib/python3.6/site-packages/wandb/sdk/wandb_init.py", line 747, in init
    wi.setup(kwargs)
  File "/home/kruthika/PycharmProjects/huggingfaceBert/venv/lib/python3.6/site-packages/wandb/sdk/wandb_init.py", line 154, in setup
    wandb_login._login(anonymous=anonymous, force=force, _disable_warning=True)
  File "/home/kruthika/PycharmProjects/huggingfaceBert/venv/lib/python3.6/site-packages/wandb/sdk/wandb_login.py", line 238, in _login
    wlogin.prompt_api_key()
  File "/home/kruthika/PycharmProjects/huggingfaceBert/venv/lib/python3.6/site-packages/wandb/sdk/wandb_login.py", line 174, in prompt_api_key
    raise UsageError("api_key not configured (no-tty). call " + directive)
wandb.errors.UsageError: api_key not configured (no-tty). call wandb.login(key=[your_api_key])

Process finished with exit code 1
```
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11991/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11990 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11990/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11990/comments | https://api.github.com/repos/huggingface/transformers/issues/11990/events | https://github.com/huggingface/transformers/pull/11990 | 909,532,593 | MDExOlB1bGxSZXF1ZXN0NjYwMDkzNzg5 | 11,990 | Fix examples in VisualBERT docs | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,622 | 1,628 | 1,622 | CONTRIBUTOR | null | This PR fixes the examples in [VisualBERT docs](https://huggingface.co/transformers/master/model_doc/visual_bert.html).
The issue is described in this [comment](https://github.com/huggingface/transformers/pull/10534#issuecomment-853015418) by @NielsRogge.
Requesting review from @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11990/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11990",
"html_url": "https://github.com/huggingface/transformers/pull/11990",
"diff_url": "https://github.com/huggingface/transformers/pull/11990.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11990.patch",
"merged_at": 1622643173000
} |
https://api.github.com/repos/huggingface/transformers/issues/11989 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11989/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11989/comments | https://api.github.com/repos/huggingface/transformers/issues/11989/events | https://github.com/huggingface/transformers/issues/11989 | 909,348,343 | MDU6SXNzdWU5MDkzNDgzNDM= | 11,989 | EOFError("No valid references for a sentence!") for run_translation example | {
"login": "puraminy",
"id": 5293185,
"node_id": "MDQ6VXNlcjUyOTMxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5293185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/puraminy",
"html_url": "https://github.com/puraminy",
"followers_url": "https://api.github.com/users/puraminy/followers",
"following_url": "https://api.github.com/users/puraminy/following{/other_user}",
"gists_url": "https://api.github.com/users/puraminy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/puraminy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/puraminy/subscriptions",
"organizations_url": "https://api.github.com/users/puraminy/orgs",
"repos_url": "https://api.github.com/users/puraminy/repos",
"events_url": "https://api.github.com/users/puraminy/events{/privacy}",
"received_events_url": "https://api.github.com/users/puraminy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @puraminy \r\n\r\nCould you debug the `compute_metrics` function? Maybe print/log a few `decoded_labels` that are used as references. Bit hard to guess without running the script.",
"@patil-suraj \r\n\r\nI actually did, and the problem is that the predictions are in English than Persian (the target language)! I consider it a serious bug and I posted more details in:\r\n\r\nhttps://github.com/huggingface/transformers/issues/12010",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,626 | 1,626 | NONE | null | I tried to apply `run_translation` to a dataset I created. I made sure there are no blank targets.
example of test file:
```
{"data": [{"translation": {"en": "You're quite impatient with the rest of humanity, in fact.", "fa": "ﺩﺭ ﻭﺎﻘﻋ ﺩﺭ ﻢﻗﺎﺒﻟ ﺱﺎﯾﺭ ﺎﻨﺳﺎﻧ<200c>ﻫﺍ ﮎﺎﻣﻻ ﺐﯾ<200c>ﺣﻮﺼﻠﻫ ﻪﺴﺘﯾﺩ."}}, {"translation": {"en": "Now despite this history of distrust, I still believe that indigenous people can benefit from genetic research.", "fa": "ﺡﺍﻻ ﺏﺍ ﻮﺟﻭﺩ ﺎﯿﻧ ﺐﯾ ﺎﻌﺘﻣﺍﺪﯾ ﺕﺍﺮﯿﺨﯾ ﻢﻧ ﻪﻧﻭﺯ ﻢﻌﺘﻗﺪﻣ ﻡﺭﺪﻣﺎﻧ ﺏﻮﻤﯾ ﻢﯾ ﺕﻭﺎﻨﻧﺩ ﺍﺯ ﺖﺤﻘﯿﻗﺎﺗ ﮋﻨﺘﯿﮐ ﺱﻭﺩ ﺐﺑﺮﻧﺩ."}},
```
However, I get this error:
```
File "run_translation.py", line 496, in compute_metrics
result = metric.compute(predictions=decoded_preds, references=decoded_labels)
File "/home/pouramini/miniconda3/lib/python3.7/site-packages/datasets/metric.py", line 40$
, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "/home/pouramini/.cache/huggingface/modules/datasets_modules/metrics/sacrebleu/4dba4$
29caa3766d885f0b9cde070fedb22ac3190c264a6454b8ea6703ddd466/sacrebleu.py", line 128, in _com$
ute
use_effective_order=use_effective_order,
File "/home/pouramini/miniconda3/lib/python3.7/site-packages/sacrebleu/compat.py", line 3$
, in corpus_bleu
sys_stream, ref_streams, use_effective_order=use_effective_order)
File "/home/pouramini/miniconda3/lib/python3.7/site-packages/sacrebleu/metrics/bleu.py", $
ine 286, in corpus_score
raise EOFError("No valid references for a sentence!")
EOFError: No valid references for a sentence!
```
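For context (an illustration, not from the reporter): in sacrebleu 1.x this error is raised when a sentence ends up with no non-empty reference, which can be reproduced directly:
```python
import sacrebleu

# an empty reference string for a sentence triggers the same failure:
sacrebleu.corpus_bleu(["a prediction"], [[""]])
# EOFError: No valid references for a sentence!
```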
@patil-suraj @patrickvonplaten @lhoestq | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11989/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11988 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11988/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11988/comments | https://api.github.com/repos/huggingface/transformers/issues/11988/events | https://github.com/huggingface/transformers/issues/11988 | 909,254,444 | MDU6SXNzdWU5MDkyNTQ0NDQ= | 11,988 | Add loss reduction parameter in forward() method | {
"login": "marekrydlewski",
"id": 8159135,
"node_id": "MDQ6VXNlcjgxNTkxMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8159135?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marekrydlewski",
"html_url": "https://github.com/marekrydlewski",
"followers_url": "https://api.github.com/users/marekrydlewski/followers",
"following_url": "https://api.github.com/users/marekrydlewski/following{/other_user}",
"gists_url": "https://api.github.com/users/marekrydlewski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marekrydlewski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marekrydlewski/subscriptions",
"organizations_url": "https://api.github.com/users/marekrydlewski/orgs",
"repos_url": "https://api.github.com/users/marekrydlewski/repos",
"events_url": "https://api.github.com/users/marekrydlewski/events{/privacy}",
"received_events_url": "https://api.github.com/users/marekrydlewski/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This has been asked before, but I don't think this is on the roadmap. If you want to use a different loss reduction, you can easily overwrite the model and plug in your custom loss function. See also https://github.com/huggingface/transformers/issues/9625#issuecomment-762167788 and https://github.com/huggingface/transformers/issues/7024#issue-696684075. \r\n\r\nOtherwise, the library would become a bit cluttered with all kinds of custom parameters, so simplicity is favored.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,626 | 1,626 | NONE | null | # Add loss reduction parameter in forward() method
Add the possibility to choose the reduction method for the loss functions used in the `forward` method of most models (like all BERT-based ones, `BertForMaskedLM`, `BigBirdForMaskedLM`, etc.). Currently, models use losses without the possibility to pass additional parameters.
From the PyTorch docs for [nn.CrossEntropyLoss](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html):
> reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the weighted mean of the output is taken, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
## Motivation
Especially in models like `BertForMaskedLM` passing a reduction like `none` instead of the default `mean` can be very handy to check individual losses for tokens.
In our project, we handle this by creating our own class inheriting from the Hugging Face model and overriding the `forward()` method.
## Your contribution
A quick demonstration of the idea, using the base `Bert` models as an example:
https://github.com/marekrydlewski/transformers/commit/83994b12085b3187f0b80fbb9a6d4a7b5e4bc8de
The docs are not updated; it's just a demonstration.
It can then be used like:
```
model = BertForMaskedLM(...)
output = model(..., loss_reduction='none')
```
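For comparison, the workaround pointed to in the comments above can be written without any API change (a sketch, assuming a standard BERT MLM checkpoint): skip passing `labels` and apply the loss with `reduction='none'` outside the model:
```python
import torch
from torch.nn import CrossEntropyLoss
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
labels = inputs["input_ids"]

logits = model(**inputs).logits  # no labels passed -> no internal (mean) loss
loss_fct = CrossEntropyLoss(reduction="none")
per_token_loss = loss_fct(
    logits.view(-1, model.config.vocab_size), labels.view(-1)
).view(labels.shape)  # one loss value per token
```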
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11988/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11988/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11987 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11987/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11987/comments | https://api.github.com/repos/huggingface/transformers/issues/11987/events | https://github.com/huggingface/transformers/issues/11987 | 909,236,400 | MDU6SXNzdWU5MDkyMzY0MDA= | 11,987 | Movement Pruning does not achieve expected results | {
"login": "iamweiweishi",
"id": 23145532,
"node_id": "MDQ6VXNlcjIzMTQ1NTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/23145532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iamweiweishi",
"html_url": "https://github.com/iamweiweishi",
"followers_url": "https://api.github.com/users/iamweiweishi/followers",
"following_url": "https://api.github.com/users/iamweiweishi/following{/other_user}",
"gists_url": "https://api.github.com/users/iamweiweishi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iamweiweishi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iamweiweishi/subscriptions",
"organizations_url": "https://api.github.com/users/iamweiweishi/orgs",
"repos_url": "https://api.github.com/users/iamweiweishi/repos",
"events_url": "https://api.github.com/users/iamweiweishi/events{/privacy}",
"received_events_url": "https://api.github.com/users/iamweiweishi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"that's very low of a score... i imagine it''s on squad.\r\ncould you give me more details about the command you are running?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I have the same problem. The accuracy is quite low when performing movement pruning. Have you solved it? "
] | 1,622 | 1,630 | 1,626 | NONE | null | Transformer version: master or 4.6.1
tensorflow-gpu == 2.3.1
pytorch == 1.7.1
flax == 0.3.4
I tried to reproduce the results based on the script given in examples/research_projects/movement-pruning.
The experimental results I got are not as good as the paper shows:
```
06/01/2021 22:55:40 - INFO - __main__ - ***** Running evaluation *****
06/01/2021 22:55:40 - INFO - __main__ - Num examples = 10833
06/01/2021 22:55:40 - INFO - __main__ - Batch size = 32
Evaluating: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 339/339 [01:42<00:00, 3.32it/s]
06/01/2021 22:57:22 - INFO - __main__ - Evaluation done in total 102.215492 secs (0.009436 sec per example)
06/01/2021 22:58:27 - INFO - __main__ - Results: {'exact': 0.33112582781456956, 'f1': 7.122522334334856, 'total': 10570, 'HasAns_exact': 0.33112582781456956, 'HasAns_f1': 7.122522334334856, 'HasAns_total': 10570, 'best_exact': 0.33112582781456956, 'best_exact_thresh': 0.0, 'best_f1': 7.122522334334856, 'best_f1_thresh': 0.0}
```
I followed almost every step as the 'README.md' describes. The only modification I made is 'BertLayerNorm = torch.nn.LayerNorm', following [this](https://github.com/huggingface/transformers/issues/10892).
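For clarity, the modification referenced above is a one-line alias (my paraphrase of the linked issue): newer transformers releases no longer export `BertLayerNorm`, so the pruning code can substitute the stock PyTorch layer:
```python
import torch

# stands in for the BertLayerNorm that older transformers versions exported;
# functionally it was a standard layer norm
BertLayerNorm = torch.nn.LayerNorm
```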
I also noticed the requirements in the [setup](https://github.com/huggingface/transformers/tree/master/examples/research_projects/movement-pruning#setup) step.
I wonder what I should do to reproduce the same results from the master branch.
Thank you. @VictorSanh | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11987/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11986 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11986/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11986/comments | https://api.github.com/repos/huggingface/transformers/issues/11986/events | https://github.com/huggingface/transformers/pull/11986 | 909,183,100 | MDExOlB1bGxSZXF1ZXN0NjU5Nzk1MTk0 | 11,986 | [WIP] Add ViLBERT | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Unstale.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Unstale"
] | 1,622 | 1,648 | null | CONTRIBUTOR | null | # What does this PR do?
This PR adds ViLBERT.
Papers: [Multitask ViLBERT](https://openaccess.thecvf.com/content_CVPR_2020/html/Lu_12-in-1_Multi-Task_Vision_and_Language_Representation_Learning_CVPR_2020_paper.html) , [VilBERT](https://arxiv.org/abs/1908.02265).
GitHub: https://github.com/facebookresearch/vilbert-multi-task
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11986/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11986/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11986",
"html_url": "https://github.com/huggingface/transformers/pull/11986",
"diff_url": "https://github.com/huggingface/transformers/pull/11986.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11986.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11985 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11985/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11985/comments | https://api.github.com/repos/huggingface/transformers/issues/11985/events | https://github.com/huggingface/transformers/pull/11985 | 909,098,346 | MDExOlB1bGxSZXF1ZXN0NjU5NzIxNDM4 | 11,985 | Changed the hidden_size to d_model for XLNET docs | {
"login": "Muktan",
"id": 31338369,
"node_id": "MDQ6VXNlcjMxMzM4MzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/31338369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Muktan",
"html_url": "https://github.com/Muktan",
"followers_url": "https://api.github.com/users/Muktan/followers",
"following_url": "https://api.github.com/users/Muktan/following{/other_user}",
"gists_url": "https://api.github.com/users/Muktan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Muktan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muktan/subscriptions",
"organizations_url": "https://api.github.com/users/Muktan/orgs",
"repos_url": "https://api.github.com/users/Muktan/repos",
"events_url": "https://api.github.com/users/Muktan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Muktan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Many models use different names for this and `hidden_size` is a constant property that works for all of them, so we will keep the docstrings updated with this. It also simplifies copy-pasting the docstrings between models.",
"> Many models use different names for this and `hidden_size` is a constant property that works for all of them, so we will keep the docstrings updated with this. It also simplifies copy-pasting the docstrings between models.\r\n\r\nOkay, in that case at least we can mention somewhere in the documentation of the XLNET that hidden_size is same as d_model. Because for someone who starts working with XLNET and has not worked with other models won't understand what is hidden_size, as hidden_size is not described in XLNET docs and is mentioned as one of the dimensions in the output. ",
"How about adding an entry in the glossary page, along with input IDs, attention mask etc?",
"> How about adding an entry in the glossary page, along with input IDs, attention mask etc?\r\n\r\nalong with.. means inside those sections [input IDs section, attention mask section] or creating another section for it?\r\n\r\nIf thinking of adding it to the glossary we can write a line in docs of XLNET, BERT ..(other models) that explanation of some terms can be explored in the glossary. As all the documentations (pandas, pytorch... etc) don't have a glossary section and it can't be expected from the reader that he/she will search for glossary section if they encounter any term not explain in a documentation page.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,626 | 1,626 | CONTRIBUTOR | null | # What does this PR do?
Changes the use of hidden_size to d_model, as the XLNet model (unlike BERT and others) uses `d_model` instead of `hidden_size`.
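As a small sketch of the point raised in the review discussion (my illustration): XLNet's config maps the common attribute name onto `d_model`, so both names resolve to the same value:
```python
from transformers import XLNetConfig

config = XLNetConfig()
# XLNet stores its model dimension as `d_model`; `hidden_size` is kept as an
# alias for cross-model consistency, so both report the same number
assert config.hidden_size == config.d_model
```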
Fixes #11938
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@sgugger @NielsRogge
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11985/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11985",
"html_url": "https://github.com/huggingface/transformers/pull/11985",
"diff_url": "https://github.com/huggingface/transformers/pull/11985.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11985.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11984 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11984/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11984/comments | https://api.github.com/repos/huggingface/transformers/issues/11984/events | https://github.com/huggingface/transformers/pull/11984 | 909,066,440 | MDExOlB1bGxSZXF1ZXN0NjU5NjkyOTMx | 11,984 | [deepspeed] Move code and doc into standalone files | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | Deepspeed integration is no longer bound to HF Trainer and its docs have grown too big to be a subsection of the Trainer docs. This PR:
- creates `transformers.deepspeed` and migrates all the code and references to it (import-path change sketched after this list)
- moves docs to `deepspeed.rst`
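For downstream code, the visible effect is an import-path change; an illustrative sketch (`deepspeed_init` is used here only as an example of a symbol that moves; the PR diff is authoritative):
```python
# before this PR (illustrative):
#   from transformers.integrations import deepspeed_init
# after this PR:
from transformers.deepspeed import deepspeed_init
```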
Note to @sgugger - I tried to make this easy to review by not making any changes to any content of code or text. Other than the updated imports in the code, the only change is the preamble section of `deepspeed.rst` - there is no need to re-review the rest unless you'd like to. I flagged that new text below.
And I also added poor-man-style redirect links from most of the previous sections in `trainer.rst` so that the old links still work. Well, I had to add anchors to sections in the new doc for this to work.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11984/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11984",
"html_url": "https://github.com/huggingface/transformers/pull/11984",
"diff_url": "https://github.com/huggingface/transformers/pull/11984.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11984.patch",
"merged_at": 1622652960000
} |
https://api.github.com/repos/huggingface/transformers/issues/11983 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11983/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11983/comments | https://api.github.com/repos/huggingface/transformers/issues/11983/events | https://github.com/huggingface/transformers/pull/11983 | 909,058,706 | MDExOlB1bGxSZXF1ZXN0NjU5Njg1OTM1 | 11,983 | Bump urllib3 from 1.25.8 to 1.26.5 in /examples/research_projects/lxmert | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] | closed | false | null | [] | [] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.25.8 to 1.26.5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/urllib3/urllib3/releases">urllib3's releases</a>.</em></p>
<blockquote>
<h2>1.26.5</h2>
<p>:warning: <strong>IMPORTANT: urllib3 v2.0 will drop support for Python 2</strong>: <a href="https://urllib3.readthedocs.io/en/latest/v2-roadmap.html">Read more in the v2.0 Roadmap</a></p>
<ul>
<li>Fixed deprecation warnings emitted in Python 3.10.</li>
<li>Updated vendored <code>six</code> library to 1.16.0.</li>
<li>Improved performance of URL parser when splitting the authority component.</li>
</ul>
<p><strong>If you or your organization rely on urllib3 consider supporting us via <a href="https://github.com/sponsors/urllib3">GitHub Sponsors</a></strong></p>
<h2>1.26.4</h2>
<p>:warning: <strong>IMPORTANT: urllib3 v2.0 will drop support for Python 2</strong>: <a href="https://urllib3.readthedocs.io/en/latest/v2-roadmap.html">Read more in the v2.0 Roadmap</a></p>
<ul>
<li>Changed behavior of the default <code>SSLContext</code> when connecting to HTTPS proxy during HTTPS requests. The default <code>SSLContext</code> now sets <code>check_hostname=True</code>.</li>
</ul>
<p><strong>If you or your organization rely on urllib3 consider supporting us via <a href="https://github.com/sponsors/urllib3">GitHub Sponsors</a></strong></p>
<h2>1.26.3</h2>
<p>:warning: <strong>IMPORTANT: urllib3 v2.0 will drop support for Python 2</strong>: <a href="https://urllib3.readthedocs.io/en/latest/v2-roadmap.html">Read more in the v2.0 Roadmap</a></p>
<ul>
<li>
<p>Fixed bytes and string comparison issue with headers (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/2141">#2141</a>)</p>
</li>
<li>
<p>Changed <code>ProxySchemeUnknown</code> error message to be more actionable if the user supplies a proxy URL without a scheme (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/2107">#2107</a>)</p>
</li>
</ul>
<p><strong>If you or your organization rely on urllib3 consider supporting us via <a href="https://github.com/sponsors/urllib3">GitHub Sponsors</a></strong></p>
<h2>1.26.2</h2>
<p>:warning: <strong>IMPORTANT: urllib3 v2.0 will drop support for Python 2</strong>: <a href="https://urllib3.readthedocs.io/en/latest/v2-roadmap.html">Read more in the v2.0 Roadmap</a></p>
<ul>
<li>Fixed an issue where <code>wrap_socket</code> and <code>CERT_REQUIRED</code> wouldn't be imported properly on Python 2.7.8 and earlier (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/2052">#2052</a>)</li>
</ul>
<h2>1.26.1</h2>
<p>:warning: <strong>IMPORTANT: urllib3 v2.0 will drop support for Python 2</strong>: <a href="https://urllib3.readthedocs.io/en/latest/v2-roadmap.html">Read more in the v2.0 Roadmap</a></p>
<ul>
<li>Fixed an issue where two <code>User-Agent</code> headers would be sent if a <code>User-Agent</code> header key is passed as <code>bytes</code> (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/2047">#2047</a>)</li>
</ul>
<h2>1.26.0</h2>
<p>:warning: <strong>IMPORTANT: urllib3 v2.0 will drop support for Python 2</strong>: <a href="https://urllib3.readthedocs.io/en/latest/v2-roadmap.html">Read more in the v2.0 Roadmap</a></p>
<ul>
<li>
<p>Added support for HTTPS proxies contacting HTTPS servers (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/1923">#1923</a>, Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/1806">#1806</a>)</p>
</li>
<li>
<p>Deprecated negotiating TLSv1 and TLSv1.1 by default. Users that
still wish to use TLS earlier than 1.2 without a deprecation warning
should opt-in explicitly by setting <code>ssl_version=ssl.PROTOCOL_TLSv1_1</code> (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/2002">#2002</a>)
<strong>Starting in urllib3 v2.0: Connections that receive a <code>DeprecationWarning</code> will fail</strong></p>
</li>
<li>
<p>Deprecated <code>Retry</code> options <code>Retry.DEFAULT_METHOD_WHITELIST</code>, <code>Retry.DEFAULT_REDIRECT_HEADERS_BLACKLIST</code>
and <code>Retry(method_whitelist=...)</code> in favor of <code>Retry.DEFAULT_ALLOWED_METHODS</code>,
<code>Retry.DEFAULT_REMOVE_HEADERS_ON_REDIRECT</code>, and <code>Retry(allowed_methods=...)</code>
(Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/2000">#2000</a>) <strong>Starting in urllib3 v2.0: Deprecated options will be removed</strong></p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/urllib3/urllib3/blob/main/CHANGES.rst">urllib3's changelog</a>.</em></p>
<blockquote>
<h2>1.26.5 (2021-05-26)</h2>
<ul>
<li>Fixed deprecation warnings emitted in Python 3.10.</li>
<li>Updated vendored <code>six</code> library to 1.16.0.</li>
<li>Improved performance of URL parser when splitting
the authority component.</li>
</ul>
<h2>1.26.4 (2021-03-15)</h2>
<ul>
<li>Changed behavior of the default <code>SSLContext</code> when connecting to HTTPS proxy
during HTTPS requests. The default <code>SSLContext</code> now sets <code>check_hostname=True</code>.</li>
</ul>
<h2>1.26.3 (2021-01-26)</h2>
<ul>
<li>
<p>Fixed bytes and string comparison issue with headers (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/2141">#2141</a>)</p>
</li>
<li>
<p>Changed <code>ProxySchemeUnknown</code> error message to be
more actionable if the user supplies a proxy URL without
a scheme. (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/2107">#2107</a>)</p>
</li>
</ul>
<h2>1.26.2 (2020-11-12)</h2>
<ul>
<li>Fixed an issue where <code>wrap_socket</code> and <code>CERT_REQUIRED</code> wouldn't
be imported properly on Python 2.7.8 and earlier (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/2052">#2052</a>)</li>
</ul>
<h2>1.26.1 (2020-11-11)</h2>
<ul>
<li>Fixed an issue where two <code>User-Agent</code> headers would be sent if a
<code>User-Agent</code> header key is passed as <code>bytes</code> (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/2047">#2047</a>)</li>
</ul>
<h2>1.26.0 (2020-11-10)</h2>
<ul>
<li>
<p><strong>NOTE: urllib3 v2.0 will drop support for Python 2</strong>.
<code>Read more in the v2.0 Roadmap <https://urllib3.readthedocs.io/en/latest/v2-roadmap.html></code>_.</p>
</li>
<li>
<p>Added support for HTTPS proxies contacting HTTPS servers (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/1923">#1923</a>, Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/1806">#1806</a>)</p>
</li>
<li>
<p>Deprecated negotiating TLSv1 and TLSv1.1 by default. Users that
still wish to use TLS earlier than 1.2 without a deprecation warning</p>
</li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/urllib3/urllib3/commit/d1616473df94b94f0f5ad19d2a6608cfe93b7cdf"><code>d161647</code></a> Release 1.26.5</li>
<li><a href="https://github.com/urllib3/urllib3/commit/2d4a3fee6de2fa45eb82169361918f759269b4ec"><code>2d4a3fe</code></a> Improve performance of sub-authority splitting in URL</li>
<li><a href="https://github.com/urllib3/urllib3/commit/2698537d52f8ff1f0bbb1d45cf018b118e91f637"><code>2698537</code></a> Update vendored six to 1.16.0</li>
<li><a href="https://github.com/urllib3/urllib3/commit/07bed791e9c391d8bf12950f76537dc3c6f90550"><code>07bed79</code></a> Fix deprecation warnings for Python 3.10 ssl module</li>
<li><a href="https://github.com/urllib3/urllib3/commit/d725a9b56bb8baf87c9e6eee0e9edf010034b63b"><code>d725a9b</code></a> Add Python 3.10 to GitHub Actions</li>
<li><a href="https://github.com/urllib3/urllib3/commit/339ad34c677c98fd9ad008de1d8bbeb9dbf34381"><code>339ad34</code></a> Use pytest==6.2.4 on Python 3.10+</li>
<li><a href="https://github.com/urllib3/urllib3/commit/f271c9c3149e20d7feffb6429b135bbb6c09ddf4"><code>f271c9c</code></a> Apply latest Black formatting</li>
<li><a href="https://github.com/urllib3/urllib3/commit/1884878aac87ef0494b282e940c32c24ee917d52"><code>1884878</code></a> [1.26] Properly proxy EOF on the SSLTransport test suite</li>
<li><a href="https://github.com/urllib3/urllib3/commit/a8913042b676c510e94fc2b097f6b514ae11a537"><code>a891304</code></a> Release 1.26.4</li>
<li><a href="https://github.com/urllib3/urllib3/commit/8d65ea1ecf6e2cdc27d42124e587c1b83a3118b0"><code>8d65ea1</code></a> Merge pull request from GHSA-5phf-pp7p-vc2r</li>
<li>Additional commits viewable in <a href="https://github.com/urllib3/urllib3/compare/1.25.8...1.26.5">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11983/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11983",
"html_url": "https://github.com/huggingface/transformers/pull/11983",
"diff_url": "https://github.com/huggingface/transformers/pull/11983.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11983.patch",
"merged_at": 1622619620000
} |
https://api.github.com/repos/huggingface/transformers/issues/11982 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11982/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11982/comments | https://api.github.com/repos/huggingface/transformers/issues/11982/events | https://github.com/huggingface/transformers/issues/11982 | 909,015,604 | MDU6SXNzdWU5MDkwMTU2MDQ= | 11,982 | AttributeError: 'GPT2LMHeadModel' object has no attribute 'get_encoder' | {
"login": "goji-patai",
"id": 55723262,
"node_id": "MDQ6VXNlcjU1NzIzMjYy",
"avatar_url": "https://avatars.githubusercontent.com/u/55723262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/goji-patai",
"html_url": "https://github.com/goji-patai",
"followers_url": "https://api.github.com/users/goji-patai/followers",
"following_url": "https://api.github.com/users/goji-patai/following{/other_user}",
"gists_url": "https://api.github.com/users/goji-patai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/goji-patai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/goji-patai/subscriptions",
"organizations_url": "https://api.github.com/users/goji-patai/orgs",
"repos_url": "https://api.github.com/users/goji-patai/repos",
"events_url": "https://api.github.com/users/goji-patai/events{/privacy}",
"received_events_url": "https://api.github.com/users/goji-patai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there,\r\nthis is because in the config the value `is_encoder_decoder` is actually a string `\"False\"` which evaluates to `True` in python, and hence `generate` treats this model as an encoder-decoder model. `is_encoder_decoder` should be set to boolean false.",
"Ahh! Thank you. "
] | 1,622 | 1,622 | 1,622 | NONE | null | Hi,
I am trying to generate text from a GPT2 model I have trained from scratch using custom English language data.
OS: Windows 10
transformers 3.5.0
Pytorch 1.4.0 (upgrading torch did not help)
Tensorflow 2.2.0
GPT2LMHeadModel was trained using run_language_modeling.py with the following config:
```json
{
"_num_labels": 2,
"activation_function": "gelu_new",
"architectures": [
"GPT2LMHeadModel"
],
"attn_pdrop": 0.1,
"bos_token_id": 50256,
"do_sample": "false",
"early_stopping": "false",
"embd_pdrop": 0.1,
"eos_token_id": 50256,
"gradient_checkpointing": false,
"initializer_range": 0.02,
"is_decoder": "false",
"is_encoder_decoder": "false",
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_inner": null,
"n_layer": 12,
"n_positions": 1024,
"output_attentions": "false",
"output_hidden_states": "false",
"output_past": "true",
"resid_pdrop": 0.1,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": "true",
"summary_type": "cls_index",
"summary_use_proj": "true",
"torchscript": "false",
"use_bfloat16": "false",
"vocab_size": 49051
}
```
Training ended without errors. I tried to generate text from the model using the following script (https://dejanbatanjac.github.io/gpt2-example/):
```python
import random
import numpy as np
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer
gpt2_model = GPT2LMHeadModel.from_pretrained("LOCAL PATH TO MODEL")
gpt2_tokenizer = GPT2Tokenizer.from_pretrained("LOCAL PATH TO MODEL TOKENIZER")
seed = random.randint(0, 13)
np.random.seed(seed)
torch.random.manual_seed(seed)
torch.cuda.manual_seed(seed)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
text = """Sample prompt text """
input_ids = torch.tensor(gpt2_tokenizer.encode(text, add_special_tokens=True)).unsqueeze(0) # bs=1
gpt2_model.to(device)
gpt2_model.eval()
outputs = gpt2_model.generate(
input_ids.to(device),
max_length=500,
do_sample=True,
top_k=15,
temperature=0.65
)
print(gpt2_tokenizer.decode(outputs[0], skip_special_tokens=True))
outputs.shape, outputs[0].shape # (torch.Size([1, 500]), torch.Size([500]))
```
Running the above generation script, I get the following error trace:
```
2021-06-01 22:38:44.602457: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
Traceback (most recent call last):
File "GPT2_Text_gen_4.py", line 26, in <module>
temperature=0.65
File "C:\Users\gojeb\anaconda3\envs\TF_Pytorch_Transformers\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\Users\gojeb\anaconda3\envs\TF_Pytorch_Transformers\lib\site-packages\transformers\generation_utils.py", line 462, in generate
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
File "C:\Users\gojeb\anaconda3\envs\TF_Pytorch_Transformers\lib\site-packages\transformers\generation_utils.py", line 80, in _prepare_encoder_decoder_kwargs_for_generation
encoder = self.get_encoder()
File "C:\Users\gojeb\anaconda3\envs\TF_Pytorch_Transformers\lib\site-packages\torch\nn\modules\module.py", line 948, in __getattr__
type(self).__name__, name))
AttributeError: 'GPT2LMHeadModel' object has no attribute 'get_encoder'
```
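One thing that stands out in the config above is that the boolean-looking fields are strings. A quick check (a sketch reusing the variables from the script above) makes the consequence visible:
```python
# any non-empty string, including "false", is truthy in Python
config = gpt2_model.config
print(repr(config.is_encoder_decoder))  # 'false' -> a string, not a boolean
print(bool(config.is_encoder_decoder))  # True -> generate() takes the encoder-decoder path
```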
Please point me in the correct direction to solve this problem.
Thank you | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11982/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11982/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11981 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11981/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11981/comments | https://api.github.com/repos/huggingface/transformers/issues/11981/events | https://github.com/huggingface/transformers/pull/11981 | 908,859,122 | MDExOlB1bGxSZXF1ZXN0NjU5NTAxNjc3 | 11,981 | Rewrite ProphetNet to adapt converting ONNX friendly | {
"login": "jiafatom",
"id": 30608893,
"node_id": "MDQ6VXNlcjMwNjA4ODkz",
"avatar_url": "https://avatars.githubusercontent.com/u/30608893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiafatom",
"html_url": "https://github.com/jiafatom",
"followers_url": "https://api.github.com/users/jiafatom/followers",
"following_url": "https://api.github.com/users/jiafatom/following{/other_user}",
"gists_url": "https://api.github.com/users/jiafatom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiafatom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiafatom/subscriptions",
"organizations_url": "https://api.github.com/users/jiafatom/orgs",
"repos_url": "https://api.github.com/users/jiafatom/repos",
"events_url": "https://api.github.com/users/jiafatom/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiafatom/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This PR is originally from https://github.com/huggingface/transformers/pull/8675\r\nSince ProphetNet gets refactored, now this PR is enough.\r\n@patrickvonplaten, you are previously ok with this change, could you please sign off? or perhaps @mfuntowicz ?\r\nThanks!",
"Could someone take a look? Thank you! @qiweizhen @patrickvonplaten @Zhylkaaa @mfuntowicz",
"> Sorry, I am not really familiar with onnx, but for me this code is even more elegant than using `new` (which I guess is now discouraged in favor of `new_*`).\r\n> But wouldn't it be more flexible to use `dtype=hidden_states.dtype` instead of `np.float32` to be sure that results are identical?\r\n\r\nThanks, done.\r\n",
"@patrickvonplaten the CI failure \"check_code_quality\" says:\r\n\r\nwould reformat src/transformers/models/prophetnet/modeling_prophetnet.py\r\nOh no! 💥 💔 💥\r\n1 file would be reformatted, 909 files would be left unchanged.\r\n\r\nHow to reformat that? Thanks.",
"You can simply run `make style`",
"> You can simply run `make style`\r\n\r\n@patrickvonplaten I got the following errors when I run `make style`. I am using Ubuntu 20.04. Thanks\r\n~/dev/transformers$ make style\r\nblack examples tests src utils\r\nmake: black: Command not found\r\nmake: *** [Makefile:54: style] Error 127\r\n",
"do you have black installed? (if not I guess you should use `python -m pip install black`)",
"> do you have black installed? (if not I guess you should use `python -m pip install black`)\r\n\r\nThanks, it works!",
"@patrickvonplaten could you please approve this PR and merge? The CI failure seems unrelated. Thank you!",
"Hey @jiafatom,\r\n\r\nCould you run `make style` to get the `check_code_quality` test passing? ",
"@patrickvonplaten I actually ran `make style` and `make quality` (I guess that's what this test is using), but there are no warnings or errors, so I don't know what this is about either.",
"> @patrickvonplaten actually ran `make style` and `make quality` (I guess that's what this test is using), but there are no warnings or errors, so I don't know what this is about either.\r\n\r\nThanks, yes, actually I ran `make style` several times, it is fine in my local dev box, but still see this error in CI."
] | 1,622 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
We want to convert ProphetNet (a PyTorch model) to ONNX, but this requires some source-code changes. The current code cannot be converted to ONNX because `torch.new` bakes constant dimensions for tensors into the IR graph, which is not suitable if we want dynamic input axes for the converter. So we use `torch.full` instead.
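As a minimal sketch of the idea (the shapes and names here are illustrative, not the actual ProphetNet diff):
```python
import torch

hidden_states = torch.randn(2, 7, 16)  # (batch, seq_len, hidden)

# Tensor.new_full traces the target shape as fixed constants, which breaks
# ONNX export with dynamic batch/sequence axes:
# bias = hidden_states.new_full((2, 7), -10000.0)

# torch.full with sizes read from the input keeps those dims symbolic,
# and taking dtype from the input keeps the results identical:
bias = torch.full(
    (hidden_states.size(0), hidden_states.size(1)),
    -10000.0,
    dtype=hidden_states.dtype,
)
```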
This PR does not (should not) change any model behavior.
Fixes # (issue)
After this PR, the model can be converted to ONNX.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@qiweizhen @patrickvonplaten @Zhylkaaa
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11981/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11981",
"html_url": "https://github.com/huggingface/transformers/pull/11981",
"diff_url": "https://github.com/huggingface/transformers/pull/11981.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11981.patch",
"merged_at": 1624444458000
} |
https://api.github.com/repos/huggingface/transformers/issues/11980 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11980/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11980/comments | https://api.github.com/repos/huggingface/transformers/issues/11980/events | https://github.com/huggingface/transformers/pull/11980 | 908,634,193 | MDExOlB1bGxSZXF1ZXN0NjU5Mjk3NDMy | 11,980 | [Trainer] add train loss and flops metrics reports | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hmm, it broke several trainer tests:\r\n```\r\nFAILED tests/test_trainer.py::TrainerIntegrationTest::test_can_resume_training\r\nFAILED tests/test_trainer.py::TrainerIntegrationTest::test_resume_training_with_frozen_params\r\nFAILED tests/test_trainer.py::TrainerIntegrationTest::test_resume_training_with_gradient_accumulation\r\nFAILED tests/test_trainer_callback.py::TrainerCallbackTest::test_event_flow\r\n```\r\n\r\nThe metrics log for `train_loss` is now different:\r\n```\r\n self.assertEqual(log, log1)\r\nE AssertionError: {'tot[14 chars].0, 'train_loss': 6.380087534586589, 'epoch': 3.0, 'step': 24} != {'tot[14 chars].0, 'train_loss': 3.9063542683919272, 'epoch': 3.0, 'step': 24}\r\nE {'epoch': 3.0,\r\nE 'step': 24,\r\nE 'total_flos': 4608.0,\r\nE - 'train_loss': 6.380087534586589}\r\nE + 'train_loss': 3.9063542683919272}\r\n```\r\n\r\nSo either there was already `train_loss` in metrics log, but it was saving a different value, So me \"fixing\" it to report the value `TrainOutput` returns broke the test. or it wasn't there and it wasn't comparing the loss and now it does and it isn't the same. Need to check.\r\n\r\n**edit:** proved to be the latter."
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | Training wasn't reporting loss metrics (or flops, it seems); this PR fixes that. Now we get:
```
***** train metrics *****
epoch = 1.0
total_flos = 405GF
train_loss = 2.9435
train_runtime = 0:00:01.75
train_samples = 20
train_samples_per_second = 11.401
train_steps_per_second = 1.14
```
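For context, the reporting pattern in the example scripts then looks roughly like this (a sketch with assumed variable names, not the exact diff):
```python
train_result = trainer.train()
metrics = train_result.metrics
metrics["train_samples"] = len(train_dataset)

trainer.log_metrics("train", metrics)   # pretty-prints the block above
trainer.save_metrics("train", metrics)  # writes train_results.json
```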
It also moves metrics logging to after all metrics have been updated.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11980/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11980",
"html_url": "https://github.com/huggingface/transformers/pull/11980",
"diff_url": "https://github.com/huggingface/transformers/pull/11980.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11980.patch",
"merged_at": 1622588311000
} |
https://api.github.com/repos/huggingface/transformers/issues/11979 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11979/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11979/comments | https://api.github.com/repos/huggingface/transformers/issues/11979/events | https://github.com/huggingface/transformers/pull/11979 | 908,583,640 | MDExOlB1bGxSZXF1ZXN0NjU5MjUzNzg3 | 11,979 | Typo in usage example, changed to device instead of torch_device | {
"login": "albertovilla",
"id": 1217687,
"node_id": "MDQ6VXNlcjEyMTc2ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217687?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertovilla",
"html_url": "https://github.com/albertovilla",
"followers_url": "https://api.github.com/users/albertovilla/followers",
"following_url": "https://api.github.com/users/albertovilla/following{/other_user}",
"gists_url": "https://api.github.com/users/albertovilla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertovilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertovilla/subscriptions",
"organizations_url": "https://api.github.com/users/albertovilla/orgs",
"repos_url": "https://api.github.com/users/albertovilla/repos",
"events_url": "https://api.github.com/users/albertovilla/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertovilla/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? Not applicable
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11979/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11979",
"html_url": "https://github.com/huggingface/transformers/pull/11979",
"diff_url": "https://github.com/huggingface/transformers/pull/11979.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11979.patch",
"merged_at": 1622573929000
} |
https://api.github.com/repos/huggingface/transformers/issues/11978 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11978/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11978/comments | https://api.github.com/repos/huggingface/transformers/issues/11978/events | https://github.com/huggingface/transformers/issues/11978 | 908,557,391 | MDU6SXNzdWU5MDg1NTczOTE= | 11,978 | No package metadata found for tqdm while generating exe | {
"login": "MK096",
"id": 20142735,
"node_id": "MDQ6VXNlcjIwMTQyNzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/20142735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MK096",
"html_url": "https://github.com/MK096",
"followers_url": "https://api.github.com/users/MK096/followers",
"following_url": "https://api.github.com/users/MK096/following{/other_user}",
"gists_url": "https://api.github.com/users/MK096/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MK096/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MK096/subscriptions",
"organizations_url": "https://api.github.com/users/MK096/orgs",
"repos_url": "https://api.github.com/users/MK096/repos",
"events_url": "https://api.github.com/users/MK096/events{/privacy}",
"received_events_url": "https://api.github.com/users/MK096/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, \r\nCan you please tell me how you managed to solve this issue ?"
] | 1,622 | 1,684 | 1,622 | NONE | null | Hi,
Whenever I try to build an executable file I always get the error: No package metadata found for tqdm.
I've tried hidden-import tqdm but it didn't work. It works fine in the Python IDLE, but fails when running the exe.
I am using Windows 10 (64-bit)
Code:
```python
import eel
import tqdm  # i thought of importing it to see whether this solves anything.. but it didn't help
from transformers import pipeline
print("Loaded")
eel.init('Web')
# try...catch to open index.html [UI]
```

Any help will be really appreciated.
Thank You | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11978/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11977 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11977/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11977/comments | https://api.github.com/repos/huggingface/transformers/issues/11977/events | https://github.com/huggingface/transformers/issues/11977 | 908,556,810 | MDU6SXNzdWU5MDg1NTY4MTA= | 11,977 | T5-Training Arguments | {
"login": "kevin3567",
"id": 31675719,
"node_id": "MDQ6VXNlcjMxNjc1NzE5",
"avatar_url": "https://avatars.githubusercontent.com/u/31675719?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kevin3567",
"html_url": "https://github.com/kevin3567",
"followers_url": "https://api.github.com/users/kevin3567/followers",
"following_url": "https://api.github.com/users/kevin3567/following{/other_user}",
"gists_url": "https://api.github.com/users/kevin3567/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kevin3567/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kevin3567/subscriptions",
"organizations_url": "https://api.github.com/users/kevin3567/orgs",
"repos_url": "https://api.github.com/users/kevin3567/repos",
"events_url": "https://api.github.com/users/kevin3567/events{/privacy}",
"received_events_url": "https://api.github.com/users/kevin3567/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The `decoder_input_ids` should not be equal to the `labels`, but instead equal to the labels shifted one position to the right.\r\n\r\nThis is because the decoder of T5 processes the text autoregressively (a fancy word to say, from left to right). So suppose you want the decoder of T5 to generate the sentence \"Belgium is gonna win the European Football Championship\". Then first, we provide the token `\"<s>\"` to the decoder, to mark the beginning of a sentence. The corresponding label will be `\"Belgium\"`. Next, we provide `[\"<s>\", \"Belgium\"]` to the decoder and the label will be `\"is\"`. Next, we provide `[\"<s>\", \"Belgium\", \"is\"]` to the decoder, and the label will be `\"gonna\"`, and so on. So as you can see, we have the following `decoder input ids` and `labels`:\r\n\r\ndecoder_input_ids = [`\"<s>\"`, `\"Belgium\"`, `\"is\"`, `\"gonna\"`, `\"win\"`, ...]\r\nlabels = [`\"Belgium\"`, `\"is\",` `\"gonna\"`, `\"win\"`, ...] \r\n\r\n=> so as you can see, the decoder_input_ids are equal to the labels, but shifted one position to the right. That's why have\r\n\r\n`decoder_input_ids = self._shift_right(labels)` in the code of `modeling_t5.py`, as can be seen [here](https://github.com/huggingface/transformers/blob/47a98fc4cb6a561576309a57b315b042977d194c/src/transformers/models/t5/modeling_t5.py#L1583). If you don't specify the decoder_input_ids yourself, the model will create them for you (based on the labels).",
"Okay, that makes sense. Thanks.",
"Hi, \r\n\r\nI am currently some baseline models and I have a follow up question regarding the _decoder_input_ids_ and the _labels_ argument. \r\n\r\nI have read from another tutorial, that for the EncoderDecoder model (another Seq2Seq model), the decoder_input_ids and the labels should be copies of each other. Specifically, (retrieved from https://github.com/utkd/encdecmodel-hf/blob/master/train_model.py, Line 117-124):\r\n\r\n> en_input = en_input.to(device)\r\n> de_output = de_output.to(device)\r\n> en_masks = en_masks.to(device)\r\n> de_masks = de_masks.to(device)\r\n> \r\n> lm_labels = de_output.clone()\r\n> out = model(input_ids=en_input, attention_mask=en_masks,\r\n> decoder_input_ids=de_output, decoder_attention_mask=de_masks,lm_labels=lm_labels)\r\n\r\n\r\nThis is different from how T5 handles _decoder_input_ids_ and the _labels_, which requires the _lm_labels_ (replaced by _labels_ in later hugginface transformers version) to be shifted right.\r\n\r\nAssuming that the EncoderDecoder tutorial is correct, does this mean that for different Seq2Seq models, how _decoder_input_ids_ and the _labels_ should be prepared for model training are also different? \r\n",
"Looking at the code example in the [documentation of the EncoderDecoder model](https://huggingface.co/transformers/model_doc/encoderdecoder.html), it looks like there, the `decoder_input_ids` are indeed set equal to the `labels`. This might help you: https://github.com/huggingface/transformers/issues/6487#issuecomment-674172930.\r\n\r\nI think they all train in the same way, but the `EncoderDecoder` model itself takes care of adding the BOS token (= beginning of sequence) to the `decoder_input_ids`. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,626 | 1,626 | NONE | null | Hi,
I am currently training a T5 model. I have noticed that to compute the loss for the T5 model during training, I should not pass the decoder_input_ids argument, but only use the labels argument. Otherwise, the trained model will generate gibberish. So, for example, this does not work:
> labels = decoder_input_ids
> outputs = model(input_ids=input_ids,
> attention_mask=attention_mask,
> decoder_input_ids = decoder_input_ids
> decoder_attention_mask = decoder_attention_mask
> labels=labels)
But this will work:
> labels = decoder_input_ids
> labels[labels[:, :] == tokenizer.pad_token_id] = -100 # do label mask
> outputs = model(input_ids=input_ids,
> attention_mask=attention_mask,
> labels=labels)
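For reference, when only `labels` are passed, `modeling_t5.py` derives the decoder inputs internally; a simplified sketch of its `_shift_right` logic:
```python
import torch

def shift_right(labels, decoder_start_token_id=0, pad_token_id=0):
    # prepend the decoder start token and drop the last label
    shifted = labels.new_zeros(labels.shape)
    shifted[..., 1:] = labels[..., :-1].clone()
    shifted[..., 0] = decoder_start_token_id
    # positions masked with -100 in the labels become pad tokens again
    shifted.masked_fill_(shifted == -100, pad_token_id)
    return shifted
```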
Why is this the case? What is the difference between the two code snippets when computing the loss? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11977/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11976 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11976/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11976/comments | https://api.github.com/repos/huggingface/transformers/issues/11976/events | https://github.com/huggingface/transformers/pull/11976 | 908,555,323 | MDExOlB1bGxSZXF1ZXN0NjU5MjI5ODE5 | 11,976 | Update return introduction of `forward` method | {
"login": "kouyk",
"id": 1729497,
"node_id": "MDQ6VXNlcjE3Mjk0OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1729497?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kouyk",
"html_url": "https://github.com/kouyk",
"followers_url": "https://api.github.com/users/kouyk/followers",
"following_url": "https://api.github.com/users/kouyk/following{/other_user}",
"gists_url": "https://api.github.com/users/kouyk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kouyk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kouyk/subscriptions",
"organizations_url": "https://api.github.com/users/kouyk/orgs",
"repos_url": "https://api.github.com/users/kouyk/repos",
"events_url": "https://api.github.com/users/kouyk/events{/privacy}",
"received_events_url": "https://api.github.com/users/kouyk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Requesting review from @sgugger",
"Looks like you just need to run `make style` on your branch and we should be good to merge!",
"I have fixed the styling issues!",
"Thanks!"
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | # What does this PR do?
Make it clear that the `forward` method now returns a dict instead of a tuple.
PR #8530 switched the default value of `return_dict` in configurations to `True`. This broke some older code that relied on `return_dict` being set to `False`, since unpacking a dict-like output as if it were a tuple can have unexpected outcomes, such as receiving strings (the keys) instead of tensors.
The phrasing of the introduction under the return section of the `forward` method currently states that a dictionary will be returned only if `return_dict=True` is passed or `config.return_dict` is set to `True`. This has not been accurate since the default configuration changed, so it would be beneficial for readers to update this portion to indicate that those values need to be `False` for a tuple to be returned.
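A small illustration of the behavior difference (using an untrained model purely for demonstration):
```python
import torch
from transformers import BertConfig, BertModel

model = BertModel(BertConfig())
input_ids = torch.tensor([[101, 7592, 102]])

outputs = model(input_ids)                     # default: a ModelOutput (dict-like)
print(outputs.last_hidden_state.shape)         # accessed by attribute/key

outputs = model(input_ids, return_dict=False)  # opt back into the old behavior
print(type(outputs))                           # <class 'tuple'>
```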
This will likely save readers some time when adapting old code :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11976/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11976",
"html_url": "https://github.com/huggingface/transformers/pull/11976",
"diff_url": "https://github.com/huggingface/transformers/pull/11976.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11976.patch",
"merged_at": 1622652789000
} |
https://api.github.com/repos/huggingface/transformers/issues/11975 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11975/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11975/comments | https://api.github.com/repos/huggingface/transformers/issues/11975/events | https://github.com/huggingface/transformers/issues/11975 | 908,548,504 | MDU6SXNzdWU5MDg1NDg1MDQ= | 11,975 | It seems not able to add the args "repetition_penalty" when running the code run_summarization.py for prediction. | {
"login": "moooooser999",
"id": 64945774,
"node_id": "MDQ6VXNlcjY0OTQ1Nzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/64945774?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moooooser999",
"html_url": "https://github.com/moooooser999",
"followers_url": "https://api.github.com/users/moooooser999/followers",
"following_url": "https://api.github.com/users/moooooser999/following{/other_user}",
"gists_url": "https://api.github.com/users/moooooser999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moooooser999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moooooser999/subscriptions",
"organizations_url": "https://api.github.com/users/moooooser999/orgs",
"repos_url": "https://api.github.com/users/moooooser999/repos",
"events_url": "https://api.github.com/users/moooooser999/events{/privacy}",
"received_events_url": "https://api.github.com/users/moooooser999/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, that's right. The `run_summarization` script does not accept all `generate` arguments. Instead, those arguments should be set in the `config`. For your use-case, it should easy to modify the script to accept these args and then pass those to `config` so `generate` can directly access those.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,626 | 1,626 | NONE | null | I am using the summarization code provided in examples/pytorch/summarization/run_summarization.py
However, I could not add the argument "repetition_penalty" when generating.
After tracing the source code of Seq2SeqTrainer(), I found that the function `predict()` does not take the argument as input.
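Following the suggestion in the comment above, a possible workaround (a sketch with variable names assumed from the example script) is to set the extra generation arguments on the model config, which `generate()` falls back to when they are not passed explicitly:
```python
# generate() reads unset generation kwargs from model.config
model.config.repetition_penalty = 1.2

predict_results = trainer.predict(predict_dataset, max_length=128, num_beams=4)
```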
It would be helpful if the set of arguments the function accepts were more flexible; for now, it only forwards `max_length` and `num_beams` to the function `generate()` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11975/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11974 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11974/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11974/comments | https://api.github.com/repos/huggingface/transformers/issues/11974/events | https://github.com/huggingface/transformers/pull/11974 | 908,329,850 | MDExOlB1bGxSZXF1ZXN0NjU5MDM2NDMw | 11,974 | Fix loss reporting with deepspeed | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is exactly what deepspeed already does. Please see:\r\n\r\nhttps://github.com/microsoft/DeepSpeed/blob/master/deepspeed/runtime/engine.py#L1142-L1143\r\n\r\nThe scaling function is:\r\n```\r\n def _scale_loss_by_gas(self, prescaled_loss):\r\n if isinstance(prescaled_loss, torch.Tensor):\r\n scaled_loss = prescaled_loss / self.gradient_accumulation_steps()\r\n elif isinstance(prescaled_loss, tuple) or isinstance(prescaled_loss, list):\r\n scaled_loss = []\r\n for l in prescaled_loss:\r\n if isinstance(l, torch.Tensor):\r\n scaled_loss.append(l / self.gradient_accumulation_steps())\r\n else:\r\n scaled_loss.append(l)\r\n else:\r\n scaled_loss = prescaled_loss\r\n if self.warn_unscaled_loss:\r\n logger.warning(\r\n f'DeepSpeed unable to scale loss because of type: {type(prescaled_loss)}'\r\n )\r\n self.warn_unscaled_loss = False\r\n\r\n return scaled_loss\r\n```\r\n\r\nit's scaled by gradient acc steps and not scaling factor.",
"I spent some more time running tests, including fp16, and I can't find any problem with the current code. \r\n\r\nAs posted in a comment above reviewing the code shows that it only scales by grad acc steps. "
] | 1,622 | 1,651 | 1,622 | COLLABORATOR | null | # What does this PR do?
In the DeepSpeed integration inside `Trainer`, the loss currently reported is the scaled loss, i.e. scaled by the loss-scaling factor used during mixed-precision training. This has nothing to do with the actual loss (the scaling factor is basically the highest value that does not make the gradients overflow); this PR fixes that.
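For illustration only (plain numbers, not the actual integration code): with dynamic fp16 loss scaling, the backpropagated value is the true loss multiplied by the scale, so it must be divided back before being logged:
```python
true_loss = 2.35
loss_scale = 2.0 ** 16                  # a typical dynamic loss-scale value
scaled_loss = true_loss * loss_scale    # what fp16 training backpropagates
logged_loss = scaled_loss / loss_scale  # what should be reported to the user
assert abs(logged_loss - true_loss) < 1e-9
```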
Fixes #11919 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11974/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11974",
"html_url": "https://github.com/huggingface/transformers/pull/11974",
"diff_url": "https://github.com/huggingface/transformers/pull/11974.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11974.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11973 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11973/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11973/comments | https://api.github.com/repos/huggingface/transformers/issues/11973/events | https://github.com/huggingface/transformers/pull/11973 | 908,266,650 | MDExOlB1bGxSZXF1ZXN0NjU4OTgzNDk0 | 11,973 | typo correction | {
"login": "JminJ",
"id": 51041861,
"node_id": "MDQ6VXNlcjUxMDQxODYx",
"avatar_url": "https://avatars.githubusercontent.com/u/51041861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JminJ",
"html_url": "https://github.com/JminJ",
"followers_url": "https://api.github.com/users/JminJ/followers",
"following_url": "https://api.github.com/users/JminJ/following{/other_user}",
"gists_url": "https://api.github.com/users/JminJ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JminJ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JminJ/subscriptions",
"organizations_url": "https://api.github.com/users/JminJ/orgs",
"repos_url": "https://api.github.com/users/JminJ/repos",
"events_url": "https://api.github.com/users/JminJ/events{/privacy}",
"received_events_url": "https://api.github.com/users/JminJ/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I modified wrong words in src/transforers/generation_utils.py (trhe -> the)"
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | I fixed a wrong word on line 772 of src/transformers/generation_utils.py (trhe -> the) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11973/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11973",
"html_url": "https://github.com/huggingface/transformers/pull/11973",
"diff_url": "https://github.com/huggingface/transformers/pull/11973.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11973.patch",
"merged_at": 1622564699000
} |
https://api.github.com/repos/huggingface/transformers/issues/11972 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11972/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11972/comments | https://api.github.com/repos/huggingface/transformers/issues/11972/events | https://github.com/huggingface/transformers/issues/11972 | 908,246,165 | MDU6SXNzdWU5MDgyNDYxNjU= | 11,972 | RuntimeError: Overflow when unpacking long during training the model | {
"login": "SAIVENKATARAJU",
"id": 46083296,
"node_id": "MDQ6VXNlcjQ2MDgzMjk2",
"avatar_url": "https://avatars.githubusercontent.com/u/46083296?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SAIVENKATARAJU",
"html_url": "https://github.com/SAIVENKATARAJU",
"followers_url": "https://api.github.com/users/SAIVENKATARAJU/followers",
"following_url": "https://api.github.com/users/SAIVENKATARAJU/following{/other_user}",
"gists_url": "https://api.github.com/users/SAIVENKATARAJU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SAIVENKATARAJU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SAIVENKATARAJU/subscriptions",
"organizations_url": "https://api.github.com/users/SAIVENKATARAJU/orgs",
"repos_url": "https://api.github.com/users/SAIVENKATARAJU/repos",
"events_url": "https://api.github.com/users/SAIVENKATARAJU/events{/privacy}",
"received_events_url": "https://api.github.com/users/SAIVENKATARAJU/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi,\r\n\r\nI am using transformers version 4.0.0 and pytorch version 1.6.0. I am getting the same error.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I am running transformer 4.14.0 and still have the exact same error randomly during training.",
"In order to get help faster, please also include all that is asked in the issue template, with the model, dataset used, all software versions as prompted by the template. Thanks!"
] | 1,622 | 1,640 | 1,628 | NONE | null | Hi, I am training a model on a custom dataset for a QnA task. I have transformers version 4.0.0 and pytorch version 1.7.1. With the following code, I am getting the issue below.
```
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
# evaluation dataset
)
trainer.train()
```
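(For reference, `torch.tensor` raises this exact error whenever a Python integer in the data exceeds the signed 64-bit range; a minimal reproduction:)
```python
import torch

torch.tensor([2 ** 63 - 1])  # fine: fits in int64
torch.tensor([2 ** 63])      # RuntimeError: Overflow when unpacking long
```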
Error is below:
```
RuntimeError Traceback (most recent call last)
<ipython-input-16-3435b262f1ae> in <module>
----> 1 trainer.train()
~/.local/lib/python3.7/site-packages/transformers/trainer.py in train(self, model_path, trial)
727 self.control = self.callback_handler.on_epoch_begin(self.args, self.state, self.control)
728
--> 729 for step, inputs in enumerate(epoch_iterator):
730
731 # Skip past any already trained steps if resuming training
~/.local/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)
433 if self._sampler_iter is None:
434 self._reset()
--> 435 data = self._next_data()
436 self._num_yielded += 1
437 if self._dataset_kind == _DatasetKind.Iterable and \
~/.local/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self)
473 def _next_data(self):
474 index = self._next_index() # may raise StopIteration
--> 475 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
476 if self._pin_memory:
477 data = _utils.pin_memory.pin_memory(data)
~/.local/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
~/.local/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
<ipython-input-7-80744e22dabe> in __getitem__(self, idx)
6
7 def __getitem__(self, idx):
----> 8 return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
9
10 def __len__(self):
<ipython-input-7-80744e22dabe> in <dictcomp>(.0)
6
7 def __getitem__(self, idx):
----> 8 return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
9
10 def __len__(self):
RuntimeError: Overflow when unpacking long
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11972/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11971 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11971/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11971/comments | https://api.github.com/repos/huggingface/transformers/issues/11971/events | https://github.com/huggingface/transformers/pull/11971 | 908,149,015 | MDExOlB1bGxSZXF1ZXN0NjU4ODg0MTkw | 11,971 | ByT5 model | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thank you so much for adding this model.\r\n\r\nIs anyone else experiencing extremely slow training with it? I get 5 times longer training times on ByT5-large compared to mT5-large.\r\n\r\nIn the paper it's about 20% slower only.\r\n\r\n",
"Hey @ViktorThink,\r\n\r\nCould you maybe make two google colab using `mt5-small` and `byt5-small` that shows the different in training speed? :-)",
"Yes, I did a quick test just forward propagating. Seems like byt5-small takes about 4.5x times longer forward (backwards pass was fast during my tests). I think the simple explanation is that when tokenized based on utf8 instead of tokens, it generates about 4.5x more input tokens. I think I just misunderstood the paper, since it probably measured the speed per token, and not speed for a whole sentence. \r\n\r\nhttps://colab.research.google.com/drive/1Hv8XnggFscgb8M9UIEkkXYllwjFUyOB_?usp=sharing\r\n\r\nUpdate:\r\n\r\nI compared batch_size=1 with batch_size=2, and the models were almost equally fast for batch_size=1, but mT5 was even faster with batch_size=2 than with batch_size=1, while ByT5 was considerably slower. ",
"I also see slow cpu inference - byT5-small has similar speed compared to mt5-xl.\r\nAnd frankly, I do not understand how it can not be the case. The number of tokens is 5X larger in my test, so up to 25X more compute in self-attention. Hidden dimension is 3X bigger in byt5 compare to mt5 (thus FFNs take 9X time more compute, and 45X more compute taking into account the number of tokens ). And everything is also multiplied by the 1.5X numbers of layers in byt5.\r\n\r\nSure the decoding is faster (and maybe for training it is not that obvious), but classification part of Table 10 leaves me puzzled. How can this be only 1.1X slower in terms of examples given at least 1.5X more layers and 3X bigger hidden dim? Is it some TPU magic?",
"I'm also curious now! Gently pinging the author @lintingxue maybe she knows more about it :-)",
"@lintingxue @patrickvonplaten \r\nAlso here is the code I've been using for colab (maybe I did something wrong? - please let me know):\r\n``` python\r\n!pip install git+https://github.com/huggingface/transformers\r\n!pip install pip install sentencepiece\r\nimport transformers\r\nimport torch\r\ntorch.backends.cudnn.benchmark = True\r\nimport time\r\narticle = \"\"\"История «Твиттера» началась в марте 2006 года как научно-исследовательский проект компании Odeo (Сан-Франциско), первоначально для внутреннего использования. Джек Дорси ввёл понятие индивидуального пользования SMS-сервиса для общения с небольшой группой. Первоначально проект задумывался, как возможность ответить на единственный вопрос: «Что ты сейчас делаешь?».\"\"\"\r\ntorch.set_num_threads(1)\r\nfor model_name in ['google/mt5-small','google/mt5-base','google/mt5-large','google/byt5-small','google/byt5-base']:\r\n model = transformers.MT5EncoderModel.from_pretrained(model_name)#.cuda()\r\n tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)\r\n input_ids = tokenizer(article, return_tensors=\"pt\").input_ids#.cuda()\r\n print(input_ids.shape) \r\n for idx in range(5):\r\n with torch.no_grad():\r\n t0=time.time()\r\n outputs = model(input_ids)\r\n #torch.cuda.synchronize()\r\n print(f\"{model_name} {idx} {time.time() - t0} seconds\")\r\n hidden_state = outputs.last_hidden_state \r\n ```\r\n It gets:\r\n ```\r\n ...\r\n torch.Size([1, 98])\r\ngoogle/mt5-small 0 0.07804465293884277 seconds\r\ngoogle/mt5-small 1 0.07803463935852051 seconds\r\ngoogle/mt5-small 2 0.08781981468200684 seconds\r\ngoogle/mt5-small 3 0.08014941215515137 seconds\r\ngoogle/mt5-small 4 0.07851195335388184 seconds\r\n...\r\ntorch.Size([1, 98])\r\ngoogle/mt5-base 0 0.31621241569519043 seconds\r\ngoogle/mt5-base 1 0.3263530731201172 seconds\r\ngoogle/mt5-base 2 0.32642054557800293 seconds\r\ngoogle/mt5-base 3 0.3143343925476074 seconds\r\ngoogle/mt5-base 4 0.32720518112182617 seconds\r\n...\r\ntorch.Size([1, 98])\r\ngoogle/mt5-large 0 1.1833469867706299 seconds\r\ngoogle/mt5-large 1 1.160696268081665 seconds\r\ngoogle/mt5-large 2 1.1483042240142822 seconds\r\ngoogle/mt5-large 3 1.1961536407470703 seconds\r\n...\r\ntorch.Size([1, 663])\r\ngoogle/byt5-small 0 4.315548419952393 seconds\r\ngoogle/byt5-small 1 4.416741371154785 seconds\r\ngoogle/byt5-small 2 4.385504722595215 seconds\r\ngoogle/byt5-small 3 4.426936149597168 seconds\r\n...\r\ntorch.Size([1, 663])\r\ngoogle/byt5-base 0 8.502674579620361 seconds\r\ngoogle/byt5-base 1 8.467743635177612 seconds\r\ngoogle/byt5-base 2 8.519198656082153 seconds\r\ngoogle/byt5-base 3 8.492923974990845 seconds\r\ngoogle/byt5-base 4 8.318963766098022 seconds\r\n",
"Btw, the ByT5-xxl model in huggingface doesn't have a pytorch_model.bin so it's not possible to load, is it because it's new and will be added shortly?",
"byt5-xl is also not working (`pytorch_model.bin` is not correctly uploaded) btw. :)",
"thanks for letting me know guys - willl correct this!",
"Weights should be correctly uploaded now :-) ",
"@patrickvonplaten the tokenizer also seems to be super slow.\r\nAnybody else with such an experience? @stefan-it @ViktorThink ",
"@PhilipMay - yes this doesn't surprise me tbh. I implemented the tokenizer in a way that fits the libraries' tokenizer design, which is by no means optimal in terms of speed. I think we should aim for a much faster Rust-backed tokenizer here - could we maybe re-use the fast Reformer tokenizer which also works on chars? cc @Narsil "
] | 1,622 | 1,627 | 1,622 | MEMBER | null | ## ByT5
- Code: https://github.com/google-research/byt5
- Paper: https://arxiv.org/abs/2105.13626
- Twitter: https://twitter.com/colinraffel/status/1399525871678103552
- Ported checkpoints: https://huggingface.co/models?filter=arxiv:1907.06292
This model only requires a new tokenizer (and a tiny change to TFT5). New tokenizer does not require a vocab file as model just uses raw bytes. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11971/reactions",
"total_count": 4,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11971/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11971",
"html_url": "https://github.com/huggingface/transformers/pull/11971",
"diff_url": "https://github.com/huggingface/transformers/pull/11971.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11971.patch",
"merged_at": 1622570857000
} |
https://api.github.com/repos/huggingface/transformers/issues/11970 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11970/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11970/comments | https://api.github.com/repos/huggingface/transformers/issues/11970/events | https://github.com/huggingface/transformers/issues/11970 | 907,953,059 | MDU6SXNzdWU5MDc5NTMwNTk= | 11,970 | Saving and loading a model does not work | {
"login": "bipinkc19",
"id": 41479925,
"node_id": "MDQ6VXNlcjQxNDc5OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/41479925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bipinkc19",
"html_url": "https://github.com/bipinkc19",
"followers_url": "https://api.github.com/users/bipinkc19/followers",
"following_url": "https://api.github.com/users/bipinkc19/following{/other_user}",
"gists_url": "https://api.github.com/users/bipinkc19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bipinkc19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bipinkc19/subscriptions",
"organizations_url": "https://api.github.com/users/bipinkc19/orgs",
"repos_url": "https://api.github.com/users/bipinkc19/repos",
"events_url": "https://api.github.com/users/bipinkc19/events{/privacy}",
"received_events_url": "https://api.github.com/users/bipinkc19/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Your file is called `pytoch_model.bin` instead of `pytorch_model.bin`.\r\n\r\nYou should use `save_pretrained` :)",
"Sometime I wonder how more stupid I can be.\r\nThanks for the help"
] | 1,622 | 1,622 | 1,622 | NONE | null | 
After pulling a model, fine-tuning it, and saving it, loading it back shows this error.
I pulled the other files, except `pytorch_model.bin`, from the pretrained model on Hugging Face.

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11970/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11969 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11969/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11969/comments | https://api.github.com/repos/huggingface/transformers/issues/11969/events | https://github.com/huggingface/transformers/issues/11969 | 907,941,185 | MDU6SXNzdWU5MDc5NDExODU= | 11,969 | run_qa.py for Question and answering doesn't work for SQUAD2 | {
"login": "SAIVENKATARAJU",
"id": 46083296,
"node_id": "MDQ6VXNlcjQ2MDgzMjk2",
"avatar_url": "https://avatars.githubusercontent.com/u/46083296?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SAIVENKATARAJU",
"html_url": "https://github.com/SAIVENKATARAJU",
"followers_url": "https://api.github.com/users/SAIVENKATARAJU/followers",
"following_url": "https://api.github.com/users/SAIVENKATARAJU/following{/other_user}",
"gists_url": "https://api.github.com/users/SAIVENKATARAJU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SAIVENKATARAJU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SAIVENKATARAJU/subscriptions",
"organizations_url": "https://api.github.com/users/SAIVENKATARAJU/orgs",
"repos_url": "https://api.github.com/users/SAIVENKATARAJU/repos",
"events_url": "https://api.github.com/users/SAIVENKATARAJU/events{/privacy}",
"received_events_url": "https://api.github.com/users/SAIVENKATARAJU/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, the script is an example for squad, not an app that works on any data. You will need to adapt the preprocessing steps to your dataset or change your dataset to be formatted exactly like squad.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,625 | 1,625 | NONE | null | Hi, I am using a custom dataset to fine-tune QnA. When I try to train run_qa.py on my dataset, preprocessing does not work. My dataset looks like this.
```
!python /home/jupyter/Project/transformers/examples/pytorch/question-answering/run_qa.py \
--model_name_or_path deepset/bert-large-uncased-whole-word-masking-squad2 \
--train_file /home/jupyter/Project/QnAwatersoftner/squad/train-v2.0.json \
--do_train \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./Exp_model/ \
--version_2_with_negative
```
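For reference, a sketch of the record layout that the script's preprocessing assumes for a custom `--train_file` (field names follow the public `squad_v2` dataset; treat this as a format reference, since the exact loader behavior depends on the script version):
```python
# Sketch of a single SQuAD-v2-style record. run_qa.py's preprocessing expects
# "question", "context" and an "answers" dict with parallel lists.
squad_v2_style_example = {
    "id": "56ddde6b9a695914005b9628",  # any unique string
    "title": "Normans",
    "context": "The Normans were the people who in the 10th and 11th centuries ...",
    "question": "In what country is Normandy located?",
    # For unanswerable (v2) questions, both lists are simply empty.
    "answers": {"text": ["France"], "answer_start": [159]},
}
```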


Can you please help me with this? Thanks.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11969/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11968 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11968/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11968/comments | https://api.github.com/repos/huggingface/transformers/issues/11968/events | https://github.com/huggingface/transformers/pull/11968 | 907,738,332 | MDExOlB1bGxSZXF1ZXN0NjU4NTM2ODc2 | 11,968 | [Pipelines] Extend pipelines to handle multiple possible AutoModel classes | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Re:\r\n> Also, IIRC, config.architectures was not added until some point in transformers. Do we have any way to check that we're not breaking legacy models ? (Scanning the hub is my best guess)\r\n\r\nAgree that we should be careful here. I'm scanning the hub now to check, but I'm pretty sure that 99% of models have `config.architectures`. Even the very old models like `gpt2` have `config.architectures = [...]` saved. Also, instead of raising an error if `config.architectures` doesn't exist, we could just pick the first element of the tuple -> this way we ensure that we cannot break anything that worked previously. What do you think?\r\n\r\nRe:\r\n\r\n> However, I have a feeling we're adding another extra layer of complexity.\r\n> \r\n> Couldn't we use this PR, to simplify the overall logic here. Maybe None could become an empty tuple.\r\n> Single class could become 1-tuple.\r\n> \r\n> Overall the rest of the flow should be more streamlined, don't you think ?\r\n\r\nAgree that we are adding more complexity, but I don't really see how to allow multiple auto classes without adding more complexity. I don't really see how forcing everything to be in the `tuple` format will help here. But keen to hear your proposition on how this could reduce overall complexity!"
] | 1,622 | 1,622 | 1,622 | MEMBER | null | # What does this PR do?
This PR extends `pipeline` to better handle multiple auto model classes per pipeline.
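As a rough illustration of the selection logic discussed in the comments above (a hypothetical sketch, not the PR's actual code):
```python
# Hypothetical dispatch sketch -- names and matching strategy are illustrative
# only; a real implementation would map auto classes to their registered
# architectures instead of comparing class names directly.
def pick_model_class(config, candidate_classes):
    architectures = getattr(config, "architectures", None) or []
    for candidate in candidate_classes:
        if candidate.__name__ in architectures:  # illustrative matching only
            return candidate
    # Fall back to the first candidate so legacy configs that predate
    # config.architectures keep working exactly as before.
    return candidate_classes[0]
```
Falling back to the first tuple element keeps older checkpoints without `config.architectures` behaving as they did previously.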
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11968/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/11968/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11968",
"html_url": "https://github.com/huggingface/transformers/pull/11968",
"diff_url": "https://github.com/huggingface/transformers/pull/11968.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11968.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11967 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11967/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11967/comments | https://api.github.com/repos/huggingface/transformers/issues/11967/events | https://github.com/huggingface/transformers/pull/11967 | 907,693,081 | MDExOlB1bGxSZXF1ZXN0NjU4NDk5NTY3 | 11,967 | Flax Big Bird | {
"login": "thevasudevgupta",
"id": 53136577,
"node_id": "MDQ6VXNlcjUzMTM2NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thevasudevgupta",
"html_url": "https://github.com/thevasudevgupta",
"followers_url": "https://api.github.com/users/thevasudevgupta/followers",
"following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}",
"gists_url": "https://api.github.com/users/thevasudevgupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thevasudevgupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thevasudevgupta/subscriptions",
"organizations_url": "https://api.github.com/users/thevasudevgupta/orgs",
"repos_url": "https://api.github.com/users/thevasudevgupta/repos",
"events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/thevasudevgupta/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2934977194,
"node_id": "MDU6TGFiZWwyOTM0OTc3MTk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Flax",
"name": "Flax",
"color": "4862AD",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Wow awesome work! I think the important next step would be to make the test:\r\n\r\n```\r\ntests/test_modeling_flax_bigbird.py::FlaxBigBirdModelTest::test_jit_compilation\r\n```\r\n\r\nIf this test works, we can enable super fast training on TPU :-) ",
"This test is passing for all model classes except for `FlaxBigBirdForMultipleChoice` and models are already `jit` compatible. It's failing since there is some bug in `FlaxBertForMultipleChoice` (BERT) which doesn't allow it to work for `seqlen > 1`.",
"> This test is passing for all model classes except for `FlaxBigBirdForMultipleChoice` and models are already `jit` compatible. It's failing since there is some bug in `FlaxBertForMultipleChoice` (BERT) which doesn't allow it to work for `seqlen > 1`.\r\n\r\nI'm sorry - I don't follow here 100%. Running `RUN_PT_FLAX_CROSS_TESTS=1 pytest tests/test_modeling_flax_bert.py` passes, so `FlaxBertForMultipleChoice` seems to work correctly. Can you maybe open an issue showing the problem with `FlaxBertForMultipleChoice`? ",
"Hey Vasu, \r\n\r\ncould you remove the `.ipynb` debugger file? :-)",
"done.",
"@LysandreJik @sgugger @stas00 \r\n\r\nThe jitted-Flax tests are getting too expensive to be run at every commit:\r\n```\r\n423.98s call tests/test_modeling_flax_big_bird.py::FlaxBigBirdModelTest::test_jit_compilation\r\n64.09s call tests/test_modeling_flax_bert.py::FlaxBertModelTest::test_jit_compilation\r\n51.94s call tests/test_modeling_flax_electra.py::FlaxElectraModelTest::test_jit_compilation\r\n43.62s call tests/test_modeling_flax_roberta.py::FlaxRobertaModelTest::test_jit_compilation\r\n37.21s call tests/test_modeling_flax_big_bird.py::FlaxBigBirdModelTest::test_hidden_states_output\r\n29.00s call tests/test_modeling_flax_gpt2.py::FlaxGPT2ModelTest::test_greedy_generate\r\n28.08s call tests/test_modeling_flax_big_bird.py::FlaxBigBirdModelTest::test_model_outputs_equivalence\r\n27.02s call tests/test_modeling_flax_gpt2.py::FlaxGPT2ModelTest::test_sample_generate_logits_warper\r\n25.48s call tests/test_modeling_flax_bert.py::FlaxBertModelTest::test_attention_outputs\r\n25.26s call tests/test_modeling_flax_gpt2.py::FlaxGPT2ModelTest::test_sample_generate\r\n25.10s call tests/test_modeling_flax_gpt2.py::FlaxGPT2ModelTest::test_greedy_generate_attn_mask\r\n24.77s call tests/test_modeling_flax_gpt2.py::FlaxGPT2ModelTest::test_sample_generate_attn_mask\r\n23.91s call tests/test_modeling_flax_electra.py::FlaxElectraModelTest::test_attention_outputs\r\n23.50s call tests/test_modeling_flax_clip.py::FlaxCLIPModelTest::test_jit_compilation\r\n20.20s call tests/test_modeling_flax_roberta.py::FlaxRobertaModelTest::test_attention_outputs\r\n16.70s call tests/test_modeling_flax_bert.py::FlaxBertModelTest::test_model_outputs_equivalence\r\n15.46s call tests/test_modeling_flax_gpt2.py::FlaxGPT2ModelTest::test_jit_compilation\r\n13.64s call tests/test_modeling_flax_clip.py::FlaxCLIPVisionModelTest::test_jit_compilation\r\n12.68s call tests/test_modeling_flax_bert.py::FlaxBertModelTest::test_hidden_states_output\r\n12.43s call tests/test_modeling_flax_electra.py::FlaxElectraModelTest::test_model_outputs_equivalence\r\n12.05s call tests/test_tokenization_mbart50.py::MBartTokenizationTest::test_save_pretrained\r\n11.96s call tests/test_modeling_flax_big_bird.py::FlaxBigBirdModelTest::test_forward_signature\r\n11.88s call tests/test_modeling_flax_roberta.py::FlaxRobertaModelTest::test_model_outputs_equivalence\r\n11.63s call tests/test_modeling_flax_clip.py::FlaxCLIPVisionModelTest::test_attention_outputs\r\n11.11s call tests/test_modeling_flax_clip.py::FlaxCLIPModelTest::test_get_image_features\r\n```\r\n\r\nHowever they are super important to ensure that the model works on TPU. Can we somehow run them only on approval or it's probably easier to just set them to \"slow\" for now?",
"@vasudevgupta7 could you also run `make style` one last time? :-)",
"> The jitted-Flax tests are getting too expensive to be run at every commit:\r\n[...]\r\n> \r\n> However they are super important to ensure that the model works on TPU. Can we somehow run them only on approval or it's probably easier to just set them to \"slow\" for now?\r\n\r\nWhat you're saying is that the TPU tests won't be then run at all, because we don't have a TPU runner for slow tests, correct?\r\n\r\nI think Circle CI has a mechanism where you can trigger certain runs by adding a special keyword to the commit message. But that might be too complicated to remember to do.\r\n\r\nHow about this idea. Leave an open PR with a circle-ci jitted-flax tests job that only exists in this PR, and which gets rebased on a nightly basis through a cron-job and pushed, that would give a poor man's scheduled CI run on TPU. Perhaps there are some easier ways.",
"We can mark them as slow if you want but there's no Flax GPU CI right now, so the slow tests won't be run :) @stas00's proposal sounds good!",
"tests failing on CircleCI are unrelated to this PR.",
"Flax tests are disabled now - opening a PR that will run them as proposed by @stas00 ",
"Merging - great job @vasudevgupta7 "
] | 1,622 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
--------------------------------------------------------------------
**🚨 Bug detection 🚨**
`BigBirdForMultipleChoice` was incorrect and is corrected in this PR. This is a breaking change for all BigBird models that have been trained on multiple choice (0 on the hub currently)
--------------------------------------------------------------------
This PR will add `FlaxBigBirdModel`.
Evaluation Notebook: https://colab.research.google.com/drive/1rx_G9awurQekrK1mSzd3A9F_UciTTuTY?usp=sharing#scrollTo=ecjNtnAuKYo8
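A minimal usage sketch of the new class (mirrors the usual docstring pattern; checkpoint name as released by Google):
```python
# Run a forward pass with the new Flax Big Bird model.
from transformers import BigBirdTokenizer, FlaxBigBirdModel

tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base")
model = FlaxBigBirdModel.from_pretrained("google/bigbird-roberta-base")

inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
```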
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11967/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11967/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11967",
"html_url": "https://github.com/huggingface/transformers/pull/11967",
"diff_url": "https://github.com/huggingface/transformers/pull/11967.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11967.patch",
"merged_at": 1623697263000
} |
https://api.github.com/repos/huggingface/transformers/issues/11966 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11966/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11966/comments | https://api.github.com/repos/huggingface/transformers/issues/11966/events | https://github.com/huggingface/transformers/pull/11966 | 907,662,148 | MDExOlB1bGxSZXF1ZXN0NjU4NDczNzI3 | 11,966 | [DeepSpeed] decouple `DeepSpeedConfigHF` from `Trainer` | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you for the great feedback and suggestions, Sylvain.\r\n\r\nOn to the next PR to move the code and docs.",
"@stas00 I have a question about this PR. My use case is HF trainer + DeepSpeed + hyperparameter search.\r\n\r\nIn **training_arg.py**: hf_deepspeed_config is first loaded from config file (e.g. zero3.config) as HfTrainerDeepSpeedConfig, and then is adjusted with TrainingArguments values. In particular, all \"auto\" in zero3.config would be resolved to actual integral or floating values.\r\n\r\n self.hf_deepspeed_config = HfTrainerDeepSpeedConfig(self.deepspeed)\r\n self.hf_deepspeed_config.trainer_config_process(self)\r\n\r\nIn **trainer.py -> _hp_search_setup()**: hf_deepspeed_config is reset by zero3.config to HfDeepSpeedConfig, different from HfTrainerDeepSpeedConfig, and thus \"auto\" values are remained unchanged as string type, not integral or floating type. \r\n\r\n from transformers.deepspeed import HfDeepSpeedConfig\r\n self.args.hf_deepspeed_config = HfDeepSpeedConfig(self.args.deepspeed)\r\n\r\nThis didn't work for hyperparameter search because DS cannot be initialized with \"auto\" values. The error messages are attached below. I feel _hp_search_setup() should do exact the same thing as in training_args.py to reset hf_deepspeed_config as HfTrainerDeepSpeedConfig and resolve all \"auto\" values, but I am not sure what was the reason you chose the other way in this PR.\r\n\r\n```\r\n File \"/home/meiyang/src/transformers_fork/src/transformers/integrations.py\", line 164, in run_hp_search_optuna\r\n study.optimize(_objective, n_trials=n_trials, timeout=timeout, n_jobs=n_jobs)\r\n File \"/home/meiyang/bin/miniconda3/envs/gptj/lib/python3.8/site-packages/optuna/study/study.py\", line 400, in optimize\r\n _optimize(\r\n File \"/home/meiyang/bin/miniconda3/envs/gptj/lib/python3.8/site-packages/optuna/study/_optimize.py\", line 66, in _optimize\r\n _optimize_sequential(\r\n File \"/home/meiyang/bin/miniconda3/envs/gptj/lib/python3.8/site-packages/optuna/study/_optimize.py\", line 163, in _optimize_sequential\r\n trial = _run_trial(study, func, catch)\r\n File \"/home/meiyang/bin/miniconda3/envs/gptj/lib/python3.8/site-packages/optuna/study/_optimize.py\", line 264, in _run_trial\r\n raise func_err\r\n File \"/home/meiyang/bin/miniconda3/envs/gptj/lib/python3.8/site-packages/optuna/study/_optimize.py\", line 213, in _run_trial\r\n value_or_values = func(trial)\r\n File \"/home/meiyang/src/transformers_fork/src/transformers/integrations.py\", line 154, in _objective\r\n trainer.train(resume_from_checkpoint=checkpoint, trial=trial)\r\n File \"/home/meiyang/src/transformers_fork/src/transformers/trainer.py\", line 1155, in train\r\n self.model = self.call_model_init(trial)\r\n File \"/home/meiyang/src/transformers_fork/src/transformers/trainer.py\", line 1019, in call_model_init\r\n model = self.model_init()\r\n File \"run_clm_local.py\", line 289, in model_init\r\n pretrained_model = AutoModelForCausalLM.from_pretrained(\r\n File \"/home/meiyang/src/transformers_fork/src/transformers/models/auto/auto_factory.py\", line 447, in from_pretrained\r\n return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)\r\n File \"/home/meiyang/src/transformers_fork/src/transformers/modeling_utils.py\", line 1488, in from_pretrained\r\n with deepspeed.zero.Init(config_dict_or_path=deepspeed_config()):\r\n File \"/home/meiyang/src/deepspeed/deepspeed/runtime/zero/partition_parameters.py\", line 461, in __init__\r\n _ds_config = DeepSpeedConfig(config_dict_or_path,\r\n File \"/home/meiyang/src/deepspeed/deepspeed/runtime/config.py\", line 873, in __init__\r\n 
self._configure_train_batch_size()\r\n File \"/home/meiyang/src/deepspeed/deepspeed/runtime/config.py\", line 1050, in _configure_train_batch_size\r\n self._batch_assertion()\r\n **File \"/home/meiyang/src/deepspeed/deepspeed/runtime/config.py\", line 986, in _batch_assertion\r\n train_batch > 0\r\nTypeError: '>' not supported between instances of 'str' and 'int'**\r\n\r\n```",
"Honestly, I have never used `_hp_search_setup()` and have no idea what it does, so it's very possible the DS integration doesn't support it at the moment as indicated by your report. \r\n\r\nDo you want to try and make it work and make a PR if you succeed? \r\n\r\nI'm not exactly sure what you mean by:\r\n\r\n> but I am not sure what was the reason you chose the other way in this PR.\r\n\r\nbut perhaps it'd be much simpler for you to code what you think it should be and then we can look together at what you meant.\r\n\r\nHow does that sound?",
"Sure. I am working on it. After replacing to HFTrainerDeepSpeedConfig, I was able to pass DS initialization. But there are other issues for model init. Not sure if they’re DS related or not. Will continue looking tomorrow. ",
"Thank you for working on this, @dunalduck0 - I trust that you will figure it out.\r\n\r\nWhile you work on this please log all the steps so that we can reproduce the process and we will need to create tests to verify the workings of this path based on your logs as we currently don't have any tests exercising this path with deepspeed. ",
"Hi stas00, I wanted to update this thread a bit. To my question on Mar 3rd, I think the change below is good enough to make sure DeepSpeed configuration is loaded properly. . \r\n\r\n\r\nBut the ultimate goal, to use hyperparameter search feature in Transformers with DeepSpeed, is still blocking (see [discussion thread](https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/10) and [blog](https://huggingface.co/blog/ray-tune) to learn this feature). The difficulty is how to use an \"used\" DeepSpeed engine, as in the problems you addressed last month: https://github.com/microsoft/DeepSpeed/issues/1748\r\n\r\nThe difference in my case is I don't need to save/load_checkpoint. For hyperparameters search, both trainer and DeepSpeed engine need to be reused in multiple trial trainings, to discover the optimal settings. I was managed to run trial 1 (the 1st time the DeepSpeed engine is used in training) and then trial 2 failed inside of DeepSpeed with \"index out of range\" error. It looks like some partition went wrong, but I am not sure. I wonder if you have reached any good solution with DeepSpeed team on this type of issues?",
"Some improvements have been made recently in Deepspeed, but last I tried to re-use the engine it still didn't work in all situations. You're of course welcome to open and issue at Deepspeed and ask for a better engine reuse. If you don't ask other priorities will take over.\r\n\r\nBut I'm sure we can find a workaround with what we have.\r\n\r\nBottom line, I need to study how this feature works, write a simple example that deploys this feature, then add DS support, then turn it into a test.\r\n\r\nThank you for the links with the usage examples. \r\n\r\nI'm currently very busy with the BigScience 176B model training launch, but hopefully next week I should have some time to tinker with this. Unless of course, you or someone else beats me to it ;)",
"@dunalduck0, my apologies for taking forever to attend to this feature request. \r\n\r\nPlease try this PR https://github.com/huggingface/transformers/pull/16740 and let me know if it addressed your need.\r\n\r\nI only added a basic test, so if you're doing something specific that happens not to work please let me know and I will extend the test to include it."
] | 1,622 | 1,649 | 1,622 | CONTRIBUTOR | null | As requested in https://github.com/huggingface/transformers/issues/11954 this PR
* decouples `DeepSpeedConfigHF` from `Trainer` so one can activate `zero.Init()` in `modeling_utils.py` and several other places without needing to rely on the HF `Trainer` (see the sketch after this list).
* adds a new `LoggingLevel` ctx manager to `testing_utils.py`
* adds a new test testing `DeepSpeedConfigHF` decoupled from the `Trainer`.
* starts a new doc
* well, through the PR things got renamed too, see the final diff for all the changes.
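A rough sketch of the decoupled usage (class and module names as in the follow-up rename referenced in the comments above; the config path is hypothetical):
```python
# Sketch: keeping an HfDeepSpeedConfig object alive before from_pretrained()
# is what signals modeling_utils to initialize the model with
# deepspeed.zero.Init(). Requires deepspeed to be installed.
from transformers import AutoModel
from transformers.deepspeed import HfDeepSpeedConfig

ds_config = "ds_config_zero3.json"    # a ZeRO stage-3 DeepSpeed config file
dschf = HfDeepSpeedConfig(ds_config)  # must stay alive (the object is cached globally)
model = AutoModel.from_pretrained("gpt2")  # weights are partitioned at init under ZeRO-3
```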
The plan is to merge this, then move all of the Deepspeed integration code into its own `src/transformers/deepspeed.py` and the docs into `docs/source/main_classes/deepspeed.rst`, since the integration has now outgrown the Trainer alone.
Fixes: https://github.com/huggingface/transformers/issues/11954
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11966/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11966/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11966",
"html_url": "https://github.com/huggingface/transformers/pull/11966",
"diff_url": "https://github.com/huggingface/transformers/pull/11966.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11966.patch",
"merged_at": 1622579093000
} |
https://api.github.com/repos/huggingface/transformers/issues/11965 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11965/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11965/comments | https://api.github.com/repos/huggingface/transformers/issues/11965/events | https://github.com/huggingface/transformers/issues/11965 | 907,654,746 | MDU6SXNzdWU5MDc2NTQ3NDY= | 11,965 | Reproducibility Questions | {
"login": "XuhuiZhou",
"id": 20436061,
"node_id": "MDQ6VXNlcjIwNDM2MDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/20436061?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XuhuiZhou",
"html_url": "https://github.com/XuhuiZhou",
"followers_url": "https://api.github.com/users/XuhuiZhou/followers",
"following_url": "https://api.github.com/users/XuhuiZhou/following{/other_user}",
"gists_url": "https://api.github.com/users/XuhuiZhou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XuhuiZhou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XuhuiZhou/subscriptions",
"organizations_url": "https://api.github.com/users/XuhuiZhou/orgs",
"repos_url": "https://api.github.com/users/XuhuiZhou/repos",
"events_url": "https://api.github.com/users/XuhuiZhou/events{/privacy}",
"received_events_url": "https://api.github.com/users/XuhuiZhou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If you are using any of the example scripts, they set the seed before instantiating the model for full reproducibility. If not, you should do that in your script by using the `set_seed` function you can import from the library, or by using the `model_init` function to initialize your model (the `Trainer` will set the seed before calling it).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,625 | 1,625 | NONE | null | When initializing task-specific models from pre-trained language models, we see something like:
`Some weights of XLMRobertaForQuestionAnswering were not initialized from the model checkpoint at xlm-roberta-large and are newly initialized: ['qa_outputs.weight', 'qa_outputs.bias']`
It seems that the initialization is model-dependent; however, I wonder whether this behaviour is controlled by the random seed that we set via the `seed` hyper-parameter in `training_args.py`?
If not, I assume we should do the following? But how? Are there any examples?
```
To ensure reproducibility across runs, use the :func:`~transformers.Trainer.model_init` function to instantiate the model if it has some randomly initialized parameters.
```
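A minimal sketch of both documented options (model name taken from the warning above; `training_args` is assumed to be defined elsewhere):
```python
from transformers import AutoModelForQuestionAnswering, Trainer, set_seed

# Option 1: seed everything *before* the randomly initialized head is created.
set_seed(42)
model = AutoModelForQuestionAnswering.from_pretrained("xlm-roberta-large")

# Option 2: pass a model_init function; the Trainer sets the seed right
# before calling it (which also makes hyperparameter search reproducible).
def model_init():
    return AutoModelForQuestionAnswering.from_pretrained("xlm-roberta-large")

# trainer = Trainer(model_init=model_init, args=training_args, ...)
```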
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11965/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11964 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11964/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11964/comments | https://api.github.com/repos/huggingface/transformers/issues/11964/events | https://github.com/huggingface/transformers/pull/11964 | 907,615,574 | MDExOlB1bGxSZXF1ZXN0NjU4NDM1NDMz | 11,964 | Fix weight decay masking in `run_flax_glue.py` | {
"login": "n2cholas",
"id": 12474257,
"node_id": "MDQ6VXNlcjEyNDc0MjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/12474257?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n2cholas",
"html_url": "https://github.com/n2cholas",
"followers_url": "https://api.github.com/users/n2cholas/followers",
"following_url": "https://api.github.com/users/n2cholas/following{/other_user}",
"gists_url": "https://api.github.com/users/n2cholas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n2cholas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n2cholas/subscriptions",
"organizations_url": "https://api.github.com/users/n2cholas/orgs",
"repos_url": "https://api.github.com/users/n2cholas/repos",
"events_url": "https://api.github.com/users/n2cholas/events{/privacy}",
"received_events_url": "https://api.github.com/users/n2cholas/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2934977194,
"node_id": "MDU6TGFiZWwyOTM0OTc3MTk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Flax",
"name": "Flax",
"color": "4862AD",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"That's great - thanks a lot for the fix @n2cholas :-) I'll re-run the FlaxGlue suit today with your fix to check if results improve ",
"Thanks a lot for the fix @n2cholas - I reran the eval train+eval script & updated the results accordingly"
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | # What does this PR do?
Fixes #11936
In addition to the changes discussed in the issue, I combined `traverse` and `decay_path` into one function `decay_mask_fn` to simplify the implementation.
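The combined function looks roughly like this (see the diff for the exact merged code; it is then passed as the `mask` of `optax.adamw`):
```python
# Apply weight decay to everything except biases and LayerNorm scale params.
from flax import traverse_util

def decay_mask_fn(params):
    flat_params = traverse_util.flatten_dict(params)
    flat_mask = {
        path: (path[-1] != "bias" and path[-2:] != ("LayerNorm", "scale"))
        for path in flat_params
    }
    return traverse_util.unflatten_dict(flat_mask)
```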
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11964/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11964/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11964",
"html_url": "https://github.com/huggingface/transformers/pull/11964",
"diff_url": "https://github.com/huggingface/transformers/pull/11964.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11964.patch",
"merged_at": 1622716526000
} |
https://api.github.com/repos/huggingface/transformers/issues/11963 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11963/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11963/comments | https://api.github.com/repos/huggingface/transformers/issues/11963/events | https://github.com/huggingface/transformers/issues/11963 | 907,610,545 | MDU6SXNzdWU5MDc2MTA1NDU= | 11,963 | How to achive character lvl tokenization? (cant convert from huggingface/tokenizers) | {
"login": "hadaev8",
"id": 20247085,
"node_id": "MDQ6VXNlcjIwMjQ3MDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/20247085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hadaev8",
"html_url": "https://github.com/hadaev8",
"followers_url": "https://api.github.com/users/hadaev8/followers",
"following_url": "https://api.github.com/users/hadaev8/following{/other_user}",
"gists_url": "https://api.github.com/users/hadaev8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hadaev8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hadaev8/subscriptions",
"organizations_url": "https://api.github.com/users/hadaev8/orgs",
"repos_url": "https://api.github.com/users/hadaev8/repos",
"events_url": "https://api.github.com/users/hadaev8/events{/privacy}",
"received_events_url": "https://api.github.com/users/hadaev8/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I just can not figure out why two api can not be the same. Extra Burden for users"
] | 1,622 | 1,627 | 1,625 | CONTRIBUTOR | null | Initially, I thought that huggingface/tokenizers is the same thing as tokenization in this repo.
I made it like this:
```
from tokenizers import Tokenizer, models, pre_tokenizers
from tokenizers.processors import TemplateProcessing
tokenizer = Tokenizer(models.WordLevel(unk_token='[UNK]'))
tokenizer.pre_tokenizer = pre_tokenizers.Split("", "isolated")
trainer = tokenizer.model.get_trainer()
trainer.vocab_size = 100
trainer.special_tokens = ["[UNK]", "[PAD]", "[SOS]", "[SEP]", "[EOS]"]
tokenizer.train(files=["alchemist.txt"], trainer=trainer)
tokenizer.post_processor = TemplateProcessing(
single="[SOS] $A [EOS]",
pair="[SOS] $A [SEP] $B:1 [EOS]:1",
special_tokens=[
("[SOS]", 2),
("[SEP]", 3),
("[EOS]", 4),
],
)
tokenizer.enable_padding(pad_id=1, pad_token="[PAD]")
```
Still, huggingface/tokenizers lacks some features I want, like returning tensors.
So I tried to convert it to a transformers tokenizer as suggested here:
https://github.com/huggingface/tokenizers/issues/669#issuecomment-828864108
But I got this error:
```
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1631 FutureWarning,
1632 )
-> 1633 file_id = list(cls.vocab_files_names.keys())[0]
1634 vocab_files[file_id] = pretrained_model_name_or_path
1635 else:
IndexError: list index out of range
```
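One conversion route that works without `from_pretrained` (assuming a transformers version that supports the `tokenizer_object` argument of `PreTrainedTokenizerFast`):
```python
# Wrap the trained tokenizers.Tokenizer directly in a fast transformers tokenizer.
from transformers import PreTrainedTokenizerFast

hf_tokenizer = PreTrainedTokenizerFast(
    tokenizer_object=tokenizer,  # the Tokenizer built and trained above
    unk_token="[UNK]",
    pad_token="[PAD]",
    bos_token="[SOS]",
    eos_token="[EOS]",
    sep_token="[SEP]",
)
batch = hf_tokenizer(["some text", "another line"], padding=True, return_tensors="pt")
```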
So, is building a tokenizer with tokenizers and then converting it to a transformers tokenizer the best way to achieve what I want?
If so, how should I convert it? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11963/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11962 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11962/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11962/comments | https://api.github.com/repos/huggingface/transformers/issues/11962/events | https://github.com/huggingface/transformers/pull/11962 | 907,534,836 | MDExOlB1bGxSZXF1ZXN0NjU4MzY4MDk1 | 11,962 | [RAG] Fix rag from pretrained question encoder generator behavior | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,622 | 1,622 | 1,622 | MEMBER | null | # What does this PR do?
Fixes #11303
The new code allows passing model-specific parameters to `from_pretrained_...`, which will correctly change the config.
Also, `*model_kwargs` is deleted from `from_pretrained_question_encoder_generator` as it cannot be used (it is impossible to check whether the args correspond to the question encoder or the generator).
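A hypothetical usage sketch of the fixed behavior (the prefixed kwarg below is illustrative only; see the diff for the exact supported parameters):
```python
from transformers import RagTokenForGeneration

model = RagTokenForGeneration.from_pretrained_question_encoder_generator(
    "facebook/dpr-question_encoder-single-nq-base",
    "facebook/bart-large",
    generator_max_length=200,  # hypothetical: routed into the generator config
)
```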
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11962/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11962",
"html_url": "https://github.com/huggingface/transformers/pull/11962",
"diff_url": "https://github.com/huggingface/transformers/pull/11962.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11962.patch",
"merged_at": 1622621834000
} |
https://api.github.com/repos/huggingface/transformers/issues/11961 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11961/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11961/comments | https://api.github.com/repos/huggingface/transformers/issues/11961/events | https://github.com/huggingface/transformers/pull/11961 | 907,411,509 | MDExOlB1bGxSZXF1ZXN0NjU4MjYzNDg3 | 11,961 | Add MT5ForConditionalGeneration as supported arch. to summarization README | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot for fixing this!\r\n\r\nCould you also update [translation readme](https://github.com/huggingface/transformers/blob/master/examples/pytorch/translation/README.md) ?",
"> Could you also update [translation readme](https://github.com/huggingface/transformers/blob/master/examples/pytorch/translation/README.md) ?\r\n\r\nDone."
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | see #11960 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11961/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11961",
"html_url": "https://github.com/huggingface/transformers/pull/11961",
"diff_url": "https://github.com/huggingface/transformers/pull/11961.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11961.patch",
"merged_at": 1622476473000
} |
https://api.github.com/repos/huggingface/transformers/issues/11960 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11960/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11960/comments | https://api.github.com/repos/huggingface/transformers/issues/11960/events | https://github.com/huggingface/transformers/issues/11960 | 907,399,573 | MDU6SXNzdWU5MDczOTk1NzM= | 11,960 | Summarization also supports MT5ForConditionalGeneration | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"And @patil-suraj",
"Yes, you are right!\r\n\r\nWe could also add it in the[ translation readme](https://github.com/huggingface/transformers/blob/master/examples/pytorch/translation/README.md) as well.\r\n\r\nFeel free to open a PR :)",
"> Feel free to open a PR :)\r\n\r\nsee #11961\r\n\r\n",
"Fixed by #11961"
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | The [README.md](https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/README.md) of the summarization examples says it supports `T5ForConditionalGeneration`. IMO `MT5ForConditionalGeneration` should be added as well, right?
Tagging @sgugger and @sshleifer ... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11960/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11959 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11959/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11959/comments | https://api.github.com/repos/huggingface/transformers/issues/11959/events | https://github.com/huggingface/transformers/issues/11959 | 907,269,735 | MDU6SXNzdWU5MDcyNjk3MzU= | 11,959 | Add new token to pretrained GPT2 tokenizer | {
"login": "sooftware",
"id": 42150335,
"node_id": "MDQ6VXNlcjQyMTUwMzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/42150335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sooftware",
"html_url": "https://github.com/sooftware",
"followers_url": "https://api.github.com/users/sooftware/followers",
"following_url": "https://api.github.com/users/sooftware/following{/other_user}",
"gists_url": "https://api.github.com/users/sooftware/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sooftware/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sooftware/subscriptions",
"organizations_url": "https://api.github.com/users/sooftware/orgs",
"repos_url": "https://api.github.com/users/sooftware/repos",
"events_url": "https://api.github.com/users/sooftware/events{/privacy}",
"received_events_url": "https://api.github.com/users/sooftware/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I will use \"EleutherAI/gpt-neo-1.3B\" tokenizer. Is there a ",
"This should help you out: https://huggingface.co/transformers/internal/tokenization_utils.html?highlight=add_tokens#transformers.tokenization_utils_base.SpecialTokensMixin.add_tokens",
"Thank you! @LysandreJik "
] | 1,622 | 1,622 | 1,622 | NONE | null | Hi! Thank you for the awesome project!
I want to add several new tokens to the pretrained GPT-2 tokenizer, roughly along the lines of the sketch below.
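A minimal sketch of what I have in mind (an assumption on my side; `add_tokens` plus an embedding resize looks like the intended route):
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Register the new tokens with the tokenizer
num_added = tokenizer.add_tokens(["<new_token_1>", "<new_token_2>"])
print(f"added {num_added} tokens")

# Grow the embedding matrix so the new ids get vectors
model.resize_token_embeddings(len(tokenizer))
```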
Is this the right approach? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11959/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11958 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11958/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11958/comments | https://api.github.com/repos/huggingface/transformers/issues/11958/events | https://github.com/huggingface/transformers/issues/11958 | 907,258,642 | MDU6SXNzdWU5MDcyNTg2NDI= | 11,958 | Issue: IndexError: "Index out of range in self" when generating translations with MarianMTModel | {
"login": "DidiDerDenker",
"id": 31280364,
"node_id": "MDQ6VXNlcjMxMjgwMzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/31280364?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DidiDerDenker",
"html_url": "https://github.com/DidiDerDenker",
"followers_url": "https://api.github.com/users/DidiDerDenker/followers",
"following_url": "https://api.github.com/users/DidiDerDenker/following{/other_user}",
"gists_url": "https://api.github.com/users/DidiDerDenker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DidiDerDenker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DidiDerDenker/subscriptions",
"organizations_url": "https://api.github.com/users/DidiDerDenker/orgs",
"repos_url": "https://api.github.com/users/DidiDerDenker/repos",
"events_url": "https://api.github.com/users/DidiDerDenker/events{/privacy}",
"received_events_url": "https://api.github.com/users/DidiDerDenker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, this could be because some example might have seq length greater than `max_length` supported by the model. For marin max_length is 1024. You could pass `truncation=True` to `tokenizer` so it'll truncate the text if it's greater than model's max length.",
"Thanks for your quick reply. That has solved my problem :)"
] | 1,622 | 1,622 | 1,622 | NONE | null | ## Environment info
- transformers version: 4.5.1
- Python version: Python 3.7
- Using GPU in script? Yes
### Who can help
- marian: @patrickvonplaten, @patil-suraj
- text generation: @patrickvonplaten
## Information
I am currently trying to use MarianMTModel to translate text from English to German. When generating the translation, an error occurs (code and error message below).
## To reproduce
I am using Google Colab.
```python
%%capture
!pip install datasets==1.6.2
!pip install transformers==4.5.1
!pip install SentencePiece
import datasets
import tensorflow_datasets as tfds
import pandas as pd
from transformers import MarianMTModel, MarianTokenizer
train_data, train_info = tfds.load("cnn_dailymail", split="train[:85%]", with_info=True)
val_data, val_info = tfds.load("cnn_dailymail", split="validation[:10%]", with_info=True)
test_data, test_info = tfds.load("cnn_dailymail", split="test[:5%]", with_info=True)
df_train = tfds.as_dataframe(train_data, train_info)
df_val = tfds.as_dataframe(val_data, val_info)
df_test = tfds.as_dataframe(test_data, test_info)
df_train = tfds.as_dataframe(train_data.take(100), train_info)
df_val = tfds.as_dataframe(val_data.take(100), val_info)
df_test = tfds.as_dataframe(test_data.take(100), test_info)
name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)
model.resize_token_embeddings(len(tokenizer))
def translate_dataframe(df):
    corpus_text = []
    corpus_summary = []
    for index, row in df.iterrows():
        translated = model.generate(**tokenizer(row["article"], return_tensors="pt", padding=True))
        decoded = [tokenizer.decode(token, skip_special_tokens=True) for token in translated]
        corpus_text.append(decoded)
        translated = model.generate(**tokenizer(row["highlights"], return_tensors="pt", padding=True))
        decoded = [tokenizer.decode(token, skip_special_tokens=True) for token in translated]
        corpus_summary.append(decoded)
    df = pd.DataFrame({"article": corpus_text, "highlights": corpus_summary})
    return df
df_train = translate_dataframe(df_train)
df_val = translate_dataframe(df_val)
df_test = translate_dataframe(df_test)
```
Error message:
```
IndexError Traceback (most recent call last)
<ipython-input-8-879e52d92ecf> in <module>()
24
25
---> 26 df_train = translate_dataframe(df_train)
27 df_val = translate_dataframe(df_val)
28 df_test = translate_dataframe(df_test)
10 frames
<ipython-input-8-879e52d92ecf> in translate_dataframe(df)
11
12 for index, row in df.iterrows():
---> 13 translated = model.generate(**tokenizer(row["article"], return_tensors="pt", padding=True))
14 decoded = [tokenizer.decode(token, skip_special_tokens=True) for token in translated]
15 corpus_text.append(decoded)
/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
25 def decorate_context(*args, **kwargs):
26 with self.__class__():
---> 27 return func(*args, **kwargs)
28 return cast(F, decorate_context)
29
/usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, **model_kwargs)
925 if self.config.is_encoder_decoder:
926 # add encoder_outputs to model_kwargs
--> 927 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
928
929 # set input_ids as decoder_input_ids
/usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py in _prepare_encoder_decoder_kwargs_for_generation(self, input_ids, model_kwargs)
410 argument: value for argument, value in model_kwargs.items() if not argument.startswith("decoder_")
411 }
--> 412 model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)
413 return model_kwargs
414
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/usr/local/lib/python3.7/dist-packages/transformers/models/marian/modeling_marian.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)
722 inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
723
--> 724 embed_pos = self.embed_positions(input_shape)
725
726 hidden_states = inputs_embeds + embed_pos
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
25 def decorate_context(*args, **kwargs):
26 with self.__class__():
---> 27 return func(*args, **kwargs)
28 return cast(F, decorate_context)
29
/usr/local/lib/python3.7/dist-packages/transformers/models/marian/modeling_marian.py in forward(self, input_ids_shape, past_key_values_length)
137 past_key_values_length, past_key_values_length + seq_len, dtype=torch.long, device=self.weight.device
138 )
--> 139 return super().forward(positions)
140
141
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py in forward(self, input)
156 return F.embedding(
157 input, self.weight, self.padding_idx, self.max_norm,
--> 158 self.norm_type, self.scale_grad_by_freq, self.sparse)
159
160 def extra_repr(self) -> str:
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1914 # remove once script supports set_grad_enabled
1915 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1916 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1917
1918
IndexError: index out of range in self
```
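For reference, a minimal sketch of the same tokenizer call with truncation enabled (the assumption being that clipping long articles to the model's maximum input length is acceptable):
```python
# truncation=True caps each article at the model's maximum input length
batch = tokenizer(row["article"], return_tensors="pt", padding=True, truncation=True)
translated = model.generate(**batch)
```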
## Expected behavior
I expect this model to generate translations without running into this error. Could you give me some tips on how to fix this error or what is wrong in general? Thanks! :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11958/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11957 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11957/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11957/comments | https://api.github.com/repos/huggingface/transformers/issues/11957/events | https://github.com/huggingface/transformers/issues/11957 | 907,247,106 | MDU6SXNzdWU5MDcyNDcxMDY= | 11,957 | Byt5 | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"hopefully getting closed by https://github.com/huggingface/transformers/pull/11971"
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | # 🌟 New model addition
## Model description
A tokenizer-free (byte-level) version of mT5.
## Open source status
* [X] the model implementation is available: (give details)
* https://github.com/google-research/byt5
* [X] the model weights are available: (give details)
* https://github.com/google-research/byt5
* [ ] who are the authors: (mention them, if possible by @gh-username)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11957/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11957/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11956 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11956/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11956/comments | https://api.github.com/repos/huggingface/transformers/issues/11956/events | https://github.com/huggingface/transformers/pull/11956 | 907,196,016 | MDExOlB1bGxSZXF1ZXN0NjU4MDc4OTc4 | 11,956 | Authorize args when instantiating an AutoModel | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@LysandreJik thanks for this!\r\n\r\nYeah IMO it'd be great if the instantiation of `AutoModel`s from a configuration follows a similar signature as that of instantiation of regular model classes from a configuration, just from a consistency standpoint because `AutoModel`s are already effectively treated as models via `modeling_auto.py` just like any other model."
] | 1,622 | 1,622 | 1,622 | MEMBER | null | The current `_BaseAutoModelClass` class initialization does not accept any argument, and therefore fails with an arcane error when instantiating it incorrectly, as shown in https://github.com/huggingface/transformers/issues/11953 by @g-karthik:
```py
from transformers import AutoConfig, AutoModelForCausalLM
config = AutoConfig.from_pretrained("gpt2", return_dict=True, gradient_checkpointing=False)
model = AutoModelForCausalLM(config)
```
```out
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __init__() takes 1 positional argument but 2 were given
```
This PR makes the `__init__` accept arbitrary positional and keyword arguments so that the informative error below is always raised:
```out
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/xxx/transformers/src/transformers/models/auto/auto_factory.py", line 361, in __init__
raise EnvironmentError(
OSError: AutoModel is designed to be instantiated using the `AutoModel.from_pretrained(pretrained_model_name_or_path)` or `AutoModel.from_config(config)` methods.
```
Taking this opportunity to re-open the question asked by @g-karthik of whether the `AutoModel`s should have the ability to be instantiated using configuration objects via the `__init__`, similarly to other `PreTrainedModel`s.
@patrickvonplaten @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11956/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11956/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11956",
"html_url": "https://github.com/huggingface/transformers/pull/11956",
"diff_url": "https://github.com/huggingface/transformers/pull/11956.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11956.patch",
"merged_at": 1622554074000
} |
https://api.github.com/repos/huggingface/transformers/issues/11955 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11955/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11955/comments | https://api.github.com/repos/huggingface/transformers/issues/11955/events | https://github.com/huggingface/transformers/issues/11955 | 907,080,566 | MDU6SXNzdWU5MDcwODA1NjY= | 11,955 | Killed Message | {
"login": "adnan-fakahr-pk-90",
"id": 72121641,
"node_id": "MDQ6VXNlcjcyMTIxNjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/72121641?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adnan-fakahr-pk-90",
"html_url": "https://github.com/adnan-fakahr-pk-90",
"followers_url": "https://api.github.com/users/adnan-fakahr-pk-90/followers",
"following_url": "https://api.github.com/users/adnan-fakahr-pk-90/following{/other_user}",
"gists_url": "https://api.github.com/users/adnan-fakahr-pk-90/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adnan-fakahr-pk-90/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adnan-fakahr-pk-90/subscriptions",
"organizations_url": "https://api.github.com/users/adnan-fakahr-pk-90/orgs",
"repos_url": "https://api.github.com/users/adnan-fakahr-pk-90/repos",
"events_url": "https://api.github.com/users/adnan-fakahr-pk-90/events{/privacy}",
"received_events_url": "https://api.github.com/users/adnan-fakahr-pk-90/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The `EleutherAI/gpt-neo-2.7B` is a large model. Loading it in memory takes more than 10GB. \r\nDo you have the same results when trying to use the 1.3B or the 125M variants?",
"> The `EleutherAI/gpt-neo-2.7B` is a large model. Loading it in memory takes more than 10GB.\r\n> Do you have the same results when trying to use the 1.3B or the 125M variants?\r\n\r\nyes I did with 2.7 and then 1.3 both have the same result....\r\nThen how we use it, if a person has 8GB or 16GB memory installed in a PC...\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,625 | 1,625 | NONE | null | Describe the bug
When I run this command
generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')
or even download the model into a local folder first and load it like this
generator = pipeline('text-generation', model='neo-models/')
it does not load and the process just prints "Killed",
which usually means "out of memory",
even though nothing else is loaded except the PyCharm GUI.
I have tested this on Ubuntu and on a CentOS server, with the same result.
Below is the whole script:
import gc
import os
from transformers import pipeline
import torch
gc.collect()
print("================1")
**generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B') ====> here comes the KILLED**
#generator = pipeline('text-generation', model='neo-models/')
print("================2")
prompt = "what is the meaning of life"
res = generator(prompt, max_length=50, do_sample=True, temperature=0.9)
print("================")
print(res) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11955/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11954 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11954/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11954/comments | https://api.github.com/repos/huggingface/transformers/issues/11954/events | https://github.com/huggingface/transformers/issues/11954 | 907,023,267 | MDU6SXNzdWU5MDcwMjMyNjc= | 11,954 | Uncoupling ZeRO-3 weak ref bridge b/w Trainer and modeling_utils | {
"login": "g-karthik",
"id": 3851993,
"node_id": "MDQ6VXNlcjM4NTE5OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3851993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-karthik",
"html_url": "https://github.com/g-karthik",
"followers_url": "https://api.github.com/users/g-karthik/followers",
"following_url": "https://api.github.com/users/g-karthik/following{/other_user}",
"gists_url": "https://api.github.com/users/g-karthik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-karthik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-karthik/subscriptions",
"organizations_url": "https://api.github.com/users/g-karthik/orgs",
"repos_url": "https://api.github.com/users/g-karthik/repos",
"events_url": "https://api.github.com/users/g-karthik/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-karthik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Good call, @g-karthik!\r\n\r\nPlease see the decoupled PR here: https://github.com/huggingface/transformers/pull/11966\r\n\r\nNow you can just do:\r\n```\r\nfrom transformers.integrations import HfDeepSpeedConfig\r\ndsc = HfDeepSpeedConfigHF(ds_config)\r\nmodel = AutoModel.from_pretrained(name)\r\n```\r\nand it'll just work, w/o needing the Trainer.\r\n\r\nPlease let me know if it works for you. \r\n\r\n> Is it not possible to allow Init() args to come from the parent from_pretrained() method's kwargs?\r\n\r\nI can't see how this would work, since there several other core functions which rely on `is_deepspeed_zero3_enabled` and there is no way to pass this argument to those. That's why the \"environmental\" approach, rather than passing args.\r\n\r\nThe only other way I can see this solved is by tapping into the `model.config` object, but then it'll require a ton of code changes - e.g. all examples.",
"PR merged - I updated the comment above to reflect the final new name.\r\n\r\nI will now work on updating the docs so it's all clear."
] | 1,622 | 1,622 | 1,622 | NONE | null | https://github.com/huggingface/transformers/blob/fd6204b2a70d100800cb259a7fbddfc812631ed3/src/transformers/modeling_utils.py#L1168
So I briefly looked through the ZeRO-3 integration into the `Trainer` and this approach of creating a "bridge" between the `Trainer` and `modeling_utils` via a weak ref is neat.
However, what if I wanted to use ZeRO-3 with HF models outside the scope of the `Trainer` and `TrainingArguments`? Seems like I cannot at the moment.
There seem to be 4 places in `modeling_utils` where `is_deepspeed_zero3_enabled` is called, of which the only one heavily tied to the custom `DeepSpeedConfigHF` class is the one referenced above.
Is it not possible to allow `Init()` args to come from the parent `from_pretrained()` method's `kwargs`?
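To make the ask concrete, here is a hypothetical sketch of the kind of standalone entry point I have in mind (the names are illustrative, not an existing API at the time of writing):
```python
from transformers import AutoModel
from transformers.integrations import HfDeepSpeedConfig  # hypothetical standalone hook

ds_config = {"zero_optimization": {"stage": 3}}  # minimal ZeRO-3 config, for illustration
dschf = HfDeepSpeedConfig(ds_config)  # would have to stay alive while loading
model = AutoModel.from_pretrained("gpt2")  # from_pretrained could then see ZeRO-3 is enabled
```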
@stas00 @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11954/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11953 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11953/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11953/comments | https://api.github.com/repos/huggingface/transformers/issues/11953/events | https://github.com/huggingface/transformers/issues/11953 | 906,927,554 | MDU6SXNzdWU5MDY5Mjc1NTQ= | 11,953 | AutoModel abstraction fails for pre-training initialization | {
"login": "g-karthik",
"id": 3851993,
"node_id": "MDQ6VXNlcjM4NTE5OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3851993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-karthik",
"html_url": "https://github.com/g-karthik",
"followers_url": "https://api.github.com/users/g-karthik/followers",
"following_url": "https://api.github.com/users/g-karthik/following{/other_user}",
"gists_url": "https://api.github.com/users/g-karthik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-karthik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-karthik/subscriptions",
"organizations_url": "https://api.github.com/users/g-karthik/orgs",
"repos_url": "https://api.github.com/users/g-karthik/repos",
"events_url": "https://api.github.com/users/g-karthik/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-karthik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! We recommend you read the [docs regarding the `AutoModel`](https://huggingface.co/transformers/model_doc/auto.html#transformers.AutoModel.from_config). I have linked you the `from_config` method which should be used in this use case.",
"However, it is indeed unexpected for you to receive this error message. The message should be more explicit, investigating now.",
"Opened #11956 for a more explicit error, and opening your use case for discussion."
] | 1,622 | 1,622 | 1,622 | NONE | null | ## Environment info
- `transformers` version: 4.5.1
- Python version: 3.6
- PyTorch version: 1.4+
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten, @LysandreJik
## Information
Model I am using: GPT-2
The problem arises when using:
* [Y] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
from transformers import AutoConfig, AutoModelForCausalLM, GPT2LMHeadModel
config = AutoConfig.from_pretrained("gpt2", return_dict=True, gradient_checkpointing=False)
model_class = GPT2LMHeadModel
model = model_class(config) # WORKS FINE
model_class = AutoModelForCausalLM
model = model_class(config) # FAILS, stack trace below
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __init__() takes 1 positional argument but 2 were given
```
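For reference, the Auto classes expose a `from_config` method for building a freshly initialized model from a configuration; a minimal sketch:
```python
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("gpt2", return_dict=True)
model = AutoModelForCausalLM.from_config(config)  # randomly initialized weights
```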
## Expected behavior
Both cases should work fine. The latter case should pull the former class internally. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11953/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11952 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11952/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11952/comments | https://api.github.com/repos/huggingface/transformers/issues/11952/events | https://github.com/huggingface/transformers/issues/11952 | 906,818,938 | MDU6SXNzdWU5MDY4MTg5Mzg= | 11,952 | TypeError: __init__() got an unexpected keyword argument 'force_bos_token_to_be_generated' | {
"login": "t4khosu",
"id": 28463194,
"node_id": "MDQ6VXNlcjI4NDYzMTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/28463194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/t4khosu",
"html_url": "https://github.com/t4khosu",
"followers_url": "https://api.github.com/users/t4khosu/followers",
"following_url": "https://api.github.com/users/t4khosu/following{/other_user}",
"gists_url": "https://api.github.com/users/t4khosu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/t4khosu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/t4khosu/subscriptions",
"organizations_url": "https://api.github.com/users/t4khosu/orgs",
"repos_url": "https://api.github.com/users/t4khosu/repos",
"events_url": "https://api.github.com/users/t4khosu/events{/privacy}",
"received_events_url": "https://api.github.com/users/t4khosu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there,\r\n\r\n`force_bos_token_to_be_generated` is now depricated, instead you could use `forced_bos_token_id` argument, which should be set to the token id that needs to be forced as first token.",
"Thanks a lot for your fast reply!\r\n\r\nAs suggested, I used the BART tokenizer to find out the bos_token ID and now it works perfectly fine (:",
"> Hi there,\r\n> \r\n> `force_bos_token_to_be_generated` is now depricated, instead you could use `forced_bos_token_id` argument, which should be set to the token id that needs to be forced as first token.\r\n\r\nHello @patil-suraj. \r\n\r\nI ran the following code in Colab and it worked but could you confirm that corresponds to what you wrote? Thanks.\r\n\r\n```\r\nfrom transformers import BartForConditionalGeneration, BartTokenizer\r\n\r\nmodel = BartForConditionalGeneration.from_pretrained(\"facebook/bart-large\")\r\ntok = BartTokenizer.from_pretrained(\"facebook/bart-large\")\r\n\r\nexample_english_phrase = \"UN Chief Says There Is No <mask> in Syria\"\r\nbatch = tokenizer(example_english_phrase, return_tensors='pt')\r\n\r\ngenerated_ids = model.generate(batch['input_ids'], forced_bos_token_id = batch['input_ids'][0][0])\r\ntok.batch_decode(generated_ids, skip_special_tokens=True)[0] \r\n# assert tok.batch_decode(generated_ids, skip_special_tokens=True) == ['UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria']\r\n```",
"Hi @piegu \r\n\r\nIt's not necessary to use `forced_bos_token_id ` with `facebook/bart-large`, it's only needed for bart-cnn models ",
"> It's not necessary to use `forced_bos_token_id ` with `facebook/bart-large`, it's only needed for bart-cnn models\r\n\r\nHello @patil-suraj,\r\n\r\nI'm not sure to understand your answer. \r\n\r\n1. If you run [my code](https://github.com/huggingface/transformers/issues/11952#issuecomment-923264808) about `facebook/bart-large` with `forced_bos_token_id`, you get a clear output: \r\n`UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria`\r\n2. If you run it without (`generated_ids = model.generate(batch['input_ids']`), you get this: `UNALSO SEE`\r\n3. There is clearly a difference that shows that `forced_bos_token_id` has an impact with `facebook/bart-large`, no?\r\n4. bart-cnn models are finetuned model for summarization, no? (like `https://huggingface.co/ainize/bart-base-cnn`). How do you use `forced_bos_token_id` with them?\r\n4. I think the HF doc is not updated about this: https://huggingface.co/transformers/model_doc/bart.html#mask-filling\r\n\r\n**Note**: just to give an overview of this discussion, I'm researching the right code to get the BART, mBART, and MBART-50 language models making multiple token masks (ie writing zero or more tokens in the output sentence when there is a `<mask>` token in the input one) with the objective to get the full output sentence."
] | 1,622 | 1,632 | 1,622 | NONE | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-3.10.0-1160.15.2.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten, @patil-suraj
## Information
The model I am using is BART
The problem arises when using:
* [x] the official example scripts: (https://huggingface.co/transformers/model_doc/bart.html)
## To reproduce
Steps to reproduce the behavior:
1. Install transformers library
2. Run the following code snippet, as presented in the official example:
```
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", force_bos_token_to_be_generated=True)
```
3. Receive error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-21-216ff3421f95> in <module>
1 from transformers import BartForConditionalGeneration, BartTokenizer
----> 2 model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", force_bos_token_to_be_generated=True)
3 tok = BartTokenizer.from_pretrained("facebook/bart-large")
4 example_english_phrase = "UN Chief Says There Is No <mask> in Syria"
5 batch = tok(example_english_phrase, return_tensors='pt')
~/.conda/envs/groundwork/lib/python3.8/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1171 else:
1172 with no_init_weights(_enable=_fast_init):
-> 1173 model = cls(config, *model_args, **model_kwargs)
1174
1175 if from_tf:
TypeError: __init__() got an unexpected keyword argument 'force_bos_token_to_be_generated'
```
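For reference, a minimal sketch of the replacement on recent versions, where the forced first token is passed to `generate` instead of the constructor (assuming BART's BOS token is the one to force):
```python
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

batch = tok("UN Chief Says There Is No <mask> in Syria", return_tensors="pt")
generated = model.generate(batch["input_ids"], forced_bos_token_id=tok.bos_token_id)
print(tok.batch_decode(generated, skip_special_tokens=True))
```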
## Expected behavior
I expect the code not to raise an exception, and the final assertion from the docs example to hold.
If more information is needed, please let me know.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11952/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11951 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11951/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11951/comments | https://api.github.com/repos/huggingface/transformers/issues/11951/events | https://github.com/huggingface/transformers/pull/11951 | 906,784,688 | MDExOlB1bGxSZXF1ZXN0NjU3NzI0MjUx | 11,951 | [Flax] Adding Visual-Transformer | {
"login": "jayendra13",
"id": 651057,
"node_id": "MDQ6VXNlcjY1MTA1Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/651057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jayendra13",
"html_url": "https://github.com/jayendra13",
"followers_url": "https://api.github.com/users/jayendra13/followers",
"following_url": "https://api.github.com/users/jayendra13/following{/other_user}",
"gists_url": "https://api.github.com/users/jayendra13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jayendra13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jayendra13/subscriptions",
"organizations_url": "https://api.github.com/users/jayendra13/orgs",
"repos_url": "https://api.github.com/users/jayendra13/repos",
"events_url": "https://api.github.com/users/jayendra13/events{/privacy}",
"received_events_url": "https://api.github.com/users/jayendra13/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2934977194,
"node_id": "MDU6TGFiZWwyOTM0OTc3MTk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Flax",
"name": "Flax",
"color": "4862AD",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Awesome! \r\nThe equivalence tests are still failing, from CI logs\r\n```\r\nFAILED tests/test_modeling_flax_vit.py::FlaxViTModelTest::test_equivalence_flax_to_pt\r\nFAILED tests/test_modeling_flax_vit.py::FlaxViTModelTest::test_equivalence_pt_to_flax\r\n```\r\n\r\nCould you fix these, otherwise lmk and I will take care of it :) ",
"sorry having a busy week, will back to this on the weekend :weary: "
] | 1,622 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
This PR adds the `ViT` model in JAX/Flax
Fixes #11948
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11951/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11951",
"html_url": "https://github.com/huggingface/transformers/pull/11951",
"diff_url": "https://github.com/huggingface/transformers/pull/11951.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11951.patch",
"merged_at": 1623340033000
} |
https://api.github.com/repos/huggingface/transformers/issues/11950 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11950/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11950/comments | https://api.github.com/repos/huggingface/transformers/issues/11950/events | https://github.com/huggingface/transformers/issues/11950 | 906,780,710 | MDU6SXNzdWU5MDY3ODA3MTA= | 11,950 | ValueError: You have to specify either decoder_inputs or decoder_inputs_embeds | {
"login": "puraminy",
"id": 5293185,
"node_id": "MDQ6VXNlcjUyOTMxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5293185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/puraminy",
"html_url": "https://github.com/puraminy",
"followers_url": "https://api.github.com/users/puraminy/followers",
"following_url": "https://api.github.com/users/puraminy/following{/other_user}",
"gists_url": "https://api.github.com/users/puraminy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/puraminy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/puraminy/subscriptions",
"organizations_url": "https://api.github.com/users/puraminy/orgs",
"repos_url": "https://api.github.com/users/puraminy/repos",
"events_url": "https://api.github.com/users/puraminy/events{/privacy}",
"received_events_url": "https://api.github.com/users/puraminy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there, \r\n\r\nthe [summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) and [translation](https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation) examples supports fine-tuning T5 and mT5 (and other seq2seq models in the lib). Please take a look at the readme and the script.\r\n\r\nThe scripts are easily modifiable to support training on any seq2seq task.\r\n\r\nAlso there are multiple notebook on T5 training in [community notebooks ](https://huggingface.co/transformers/community.html#community-notebooks)section. Hope that helps.",
"Thank you very much! I haven't found the first examples which are up to date too.\r\n\r\nI found community notebooks later, some of them are old. Maybe a recent one in the main notebooks is a good idea.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,625 | 1,625 | NONE | null | Why is there no training example for T5 or mT5?
Could you please point me to an example? I had a hard time writing this code and ran into various errors:
This is my code:
```
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
#from transformers import MT5Model, T5Tokenizer
from transformers import MT5ForConditionalGeneration, T5Tokenizer
#tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")
raw_datasets = load_dataset("atomic")
def tokenize_function(examples):
    return tokenizer(examples["event"], max_length=128, padding="max_length", truncation=True)

def tokenize_labels(examples):
    with tokenizer.as_target_tokenizer():
        return tokenizer(examples["oReact"], return_tensors="pt")
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
#labels = raw_datasets.map(tokenize_labels, batched=True)
small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
full_train_dataset = tokenized_datasets["train"]
full_eval_dataset = tokenized_datasets["test"]
from transformers import TrainingArguments
training_args = TrainingArguments("test_trainer")
#model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-base")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small", torchscript = True)
#traced_model = torch.jit.trace(model, (input_ids, attention_mask, decoder_input_ids))
from transformers import Trainer
trainer = Trainer(
    model=model, args=training_args, train_dataset=small_train_dataset, eval_dataset=small_eval_dataset
)
trainer.train()
import numpy as np
from datasets import load_metric
metric = load_metric("accuracy")
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return metric.compute(predictions=predictions, references=labels)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=small_train_dataset,
    eval_dataset=small_eval_dataset,
    compute_metrics=compute_metrics,
)
trainer.evaluate()
```
I don't know how to feed the labels to this model...
And this is error:
```
...
tr_loss += self.training_step(model, inputs)
File "/home/pouramini/miniconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1250, in training_step
loss = self.compute_loss(model, inputs)
File "/home/pouramini/miniconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1277, in compute_loss
outputs = model(**inputs)
File "/home/pouramini/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/pouramini/miniconda3/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 1510, in forward
return_dict=return_dict,
File "/home/pouramini/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/pouramini/miniconda3/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 871, in forward
raise ValueError(f"You have to specify either {err_msg_prefix}inputs or {err_msg_prefix}inputs_embeds")
```
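For what it's worth, a minimal sketch of how labels are usually attached for seq2seq fine-tuning (the `labels` key is what `Trainer` forwards to the model for the loss). The assumptions here are that ATOMIC's `oReact` column holds one target string per example and that a target length of 32 is enough:
```python
def preprocess(examples):
    model_inputs = tokenizer(
        examples["event"], max_length=128, padding="max_length", truncation=True
    )
    # Tokenize the targets in target mode (relevant for tokenizers with
    # separate source/target handling; harmless otherwise)
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(
            examples["oReact"], max_length=32, padding="max_length", truncation=True
        )
    # The model computes its loss from the "labels" key; pad ids are replaced
    # with -100 so padding positions are ignored by the loss
    model_inputs["labels"] = [
        [(t if t != tokenizer.pad_token_id else -100) for t in seq]
        for seq in labels["input_ids"]
    ]
    return model_inputs

tokenized_datasets = raw_datasets.map(preprocess, batched=True)
```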
@sgugger @patrickvonplaten @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11950/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11949 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11949/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11949/comments | https://api.github.com/repos/huggingface/transformers/issues/11949/events | https://github.com/huggingface/transformers/issues/11949 | 906,760,401 | MDU6SXNzdWU5MDY3NjA0MDE= | 11,949 | report_to flag does not work with TFTrainer | {
"login": "tomy0000000",
"id": 23290356,
"node_id": "MDQ6VXNlcjIzMjkwMzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/23290356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomy0000000",
"html_url": "https://github.com/tomy0000000",
"followers_url": "https://api.github.com/users/tomy0000000/followers",
"following_url": "https://api.github.com/users/tomy0000000/following{/other_user}",
"gists_url": "https://api.github.com/users/tomy0000000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomy0000000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomy0000000/subscriptions",
"organizations_url": "https://api.github.com/users/tomy0000000/orgs",
"repos_url": "https://api.github.com/users/tomy0000000/repos",
"events_url": "https://api.github.com/users/tomy0000000/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomy0000000/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No this is not implemented for the `TFTrainer`. More generally, we are moving away from `TFTrainer` to go to pure Keras for the training loop.",
"Well, does that means `trainer_tf.py` is getting a huge rewrite? If not, I'm willing to work on a PR solving this particular issue. Let me know if anyone have thoughts about how this should be implemented.",
"No it's just going to disappear and we will use the Keras methods (fit etc.) instead."
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.27
- Python version: 3.8.10
- PyTorch version (GPU?): 1.7.1+cu101 (True)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
@sgugger @LysandreJik @Rocketknight1
## Information
The problem arises when using my own modified scripts: [CoLA from GLUE on TF version BERT](https://gist.github.com/tomy0000000/af06394aa00a8b0fffb992e5bf444adf)
My modified script only runs the CoLA task; it is a minimized version of the official [tf_glue.py](https://github.com/huggingface/transformers/blob/master/examples/tensorflow/text-classification/run_tf_glue.py)
I've set up comet.ml and wandb for other projects, but don't want to use them in this one.
However, the `report_to` flag doesn't seem to work with `TFTrainingArguments`.
More specifically, the problem is in [trainer_tf.py](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_tf.py#L113-L127)
## To reproduce
Just run the notebook in Jupyter; the error pops up in the 8th cell, when `TFTrainer` is initialized.
Stack trace:
```python
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-9-308135373fed> in <module>
----> 1 trainer = TFTrainer(
2 model=model,
3 args=TFTrainingArguments(output_dir=".", report_to="tensorboard"),
4 train_dataset=train_dataset,
5 eval_dataset=eval_dataset,
~/.local/lib/python3.8/site-packages/transformers/trainer_tf.py in __init__(self, model, args, train_dataset, eval_dataset, compute_metrics, tb_writer, optimizers)
120
121 if is_comet_available():
--> 122 self.setup_comet()
123 elif os.environ.get("COMET_MODE") != "DISABLED":
124 logger.info(
~/.local/lib/python3.8/site-packages/transformers/trainer_tf.py in setup_comet(self)
274 experiment = None
275 if comet_mode == "ONLINE":
--> 276 experiment = comet_ml.Experiment(**args)
277 logger.info("Automatic Comet.ml online logging enabled")
278 elif comet_mode == "OFFLINE":
~/.local/lib/python3.8/site-packages/comet_ml/__init__.py in __init__(self, api_key, project_name, workspace, log_code, log_graph, auto_param_logging, auto_metric_logging, parse_args, auto_output_logging, log_env_details, log_git_metadata, log_git_patch, disabled, log_env_gpu, log_env_host, display_summary, log_env_cpu, display_summary_level, optimizer_data, auto_weight_logging, auto_log_co2, auto_metric_step_rate, auto_histogram_tensorboard_logging, auto_histogram_epoch_rate, auto_histogram_weight_logging, auto_histogram_gradient_logging, auto_histogram_activation_logging)
239 )
240
--> 241 super(Experiment, self).__init__(
242 project_name=project_name,
243 workspace=workspace,
~/.local/lib/python3.8/site-packages/comet_ml/experiment.py in __init__(self, project_name, workspace, log_code, log_graph, auto_param_logging, auto_metric_logging, parse_args, auto_output_logging, log_env_details, log_git_metadata, log_git_patch, disabled, log_env_gpu, log_env_host, display_summary, log_env_cpu, display_summary_level, optimizer_data, auto_weight_logging, auto_log_co2, auto_metric_step_rate, auto_histogram_tensorboard_logging, auto_histogram_epoch_rate, auto_histogram_weight_logging, auto_histogram_gradient_logging, auto_histogram_activation_logging)
458 ALREADY_IMPORTED_MODULES
459 )
--> 460 raise ImportError(msg)
461
462 # Generate a unique identifier for this experiment.
ImportError: You must import Comet before these modules: tensorflow, torch, tensorboard
```
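For what it's worth, the `comet_mode == "ONLINE"` check visible in the traceback suggests a possible environment-variable workaround; this is an assumption on my side and not verified across versions:
```python
import os

# Must run before TFTrainer is constructed; setup_comet only creates an
# Experiment when COMET_MODE is "ONLINE" (see the trace above)
os.environ["COMET_MODE"] = "DISABLED"
```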
## Expected behavior
`TFTrainer` should respect the `report_to` argument provided
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11949/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11948 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11948/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11948/comments | https://api.github.com/repos/huggingface/transformers/issues/11948/events | https://github.com/huggingface/transformers/issues/11948 | 906,743,976 | MDU6SXNzdWU5MDY3NDM5NzY= | 11,948 | Flax port vision Transformer to flax | {
"login": "jayendra13",
"id": 651057,
"node_id": "MDQ6VXNlcjY1MTA1Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/651057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jayendra13",
"html_url": "https://github.com/jayendra13",
"followers_url": "https://api.github.com/users/jayendra13/followers",
"following_url": "https://api.github.com/users/jayendra13/following{/other_user}",
"gists_url": "https://api.github.com/users/jayendra13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jayendra13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jayendra13/subscriptions",
"organizations_url": "https://api.github.com/users/jayendra13/orgs",
"repos_url": "https://api.github.com/users/jayendra13/repos",
"events_url": "https://api.github.com/users/jayendra13/events{/privacy}",
"received_events_url": "https://api.github.com/users/jayendra13/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"claiming this issue has started work, @patrickvonplaten @patil-suraj "
] | 1,622 | 1,623 | 1,623 | CONTRIBUTOR | null | Port the existing Vision Transformer (ViT) model to Flax. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11948/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11947 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11947/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11947/comments | https://api.github.com/repos/huggingface/transformers/issues/11947/events | https://github.com/huggingface/transformers/issues/11947 | 906,698,868 | MDU6SXNzdWU5MDY2OTg4Njg= | 11,947 | Encoding/decoding NLP model in tensorflow lite (fine-tuned GPT2) | {
"login": "Guillaume-slize",
"id": 14851317,
"node_id": "MDQ6VXNlcjE0ODUxMzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/14851317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Guillaume-slize",
"html_url": "https://github.com/Guillaume-slize",
"followers_url": "https://api.github.com/users/Guillaume-slize/followers",
"following_url": "https://api.github.com/users/Guillaume-slize/following{/other_user}",
"gists_url": "https://api.github.com/users/Guillaume-slize/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Guillaume-slize/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Guillaume-slize/subscriptions",
"organizations_url": "https://api.github.com/users/Guillaume-slize/orgs",
"repos_url": "https://api.github.com/users/Guillaume-slize/repos",
"events_url": "https://api.github.com/users/Guillaume-slize/events{/privacy}",
"received_events_url": "https://api.github.com/users/Guillaume-slize/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!",
"Sure,\nThanks for your answer :)\n\nOn Mon, May 31, 2021 at 9:51 AM Lysandre Debut ***@***.***>\nwrote:\n\n> Hello, thanks for opening an issue! We try to keep the github issues for\n> bugs/feature requests.\n> Could you ask your question on the forum <https://discuss.huggingface.co>\n> instead?\n>\n> Thanks!\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/11947#issuecomment-851283473>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ADRJZ5KO6QL4EYWKMDXCXADTQM5WVANCNFSM45ZFOJYQ>\n> .\n>\n\n\n-- \n\nGuillaume Slizewicz\nwww.guillaumeslizewicz.com\n+32 496 53 6666\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,625 | 1,625 | NONE | null | We are in the process of building a small virtual assistant and would like it to be able to run a fine-tuned version of GPT-2 on a Raspberry Pi with a Coral accelerator.
So far, we have managed to convert our model to TFLite and to get first results. We know how to convert from words to indices with the original tokenizer, but the interpreter then needs a bigger tensor as input. We are missing the conversion from indices to tensors. Is there a simple way to do this?
You can find our pseudo-code below; we are stuck at steps 2 and 6:
```
import tensorflow as tf
import numpy as np
from transformers import GPT2Tokenizer

#Prelude
TF_MODEL_PATH_LITE = "/path/model.tflite"
interpreter = tf.lite.Interpreter(model_path=TF_MODEL_PATH_LITE)
interpreter.allocate_tensors()
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']
#1-Encode input, giving you indices
context_idx = tokenizer.encode("Hello world.", return_tensors="tf")
#2- How to convert context_idx to an appropriate np.array?
input_data = np.array(np.random.random_sample(input_shape), dtype=np.int32) #dummy input for now
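# One possible answer (untested sketch): the encoded ids are already a
# (1, seq_len) tensor, so cast them and resize the interpreter's input:
#   input_data = context_idx.numpy().astype(np.int32)
#   interpreter.resize_tensor_input(input_details[0]['index'], input_data.shape)
#   interpreter.allocate_tensors()
# If the converted model has a fixed input shape, pad with
# tokenizer.eos_token_id up to input_shape instead of resizing.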
#3- feed input
interpreter.set_tensor(input_details[0]['index'], input_data)
#4- Run model
interpreter.invoke()
#5- Get output as tensor
output_data = interpreter.get_tensor(output_details[0]['index'])
#6- How to decode this np.array to token indices?
output_idx=np.random.randint(100) #dummy for now ...
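# One possible answer (untested sketch): assuming the converted GPT-2
# returns logits of shape (1, seq_len, vocab_size), a greedy choice of
# the next token id would be:
#   output_idx = int(np.argmax(output_data[0, -1, :]))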
#7- Decode Output from idx to word
string_tf = tokenizer.decode(output_idx, skip_special_tokens=True)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11947/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11946 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11946/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11946/comments | https://api.github.com/repos/huggingface/transformers/issues/11946/events | https://github.com/huggingface/transformers/issues/11946 | 906,674,604 | MDU6SXNzdWU5MDY2NzQ2MDQ= | 11,946 | Loading mbart-large-50-one-to-many-mmt is very slow | {
"login": "MK096",
"id": 20142735,
"node_id": "MDQ6VXNlcjIwMTQyNzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/20142735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MK096",
"html_url": "https://github.com/MK096",
"followers_url": "https://api.github.com/users/MK096/followers",
"following_url": "https://api.github.com/users/MK096/following{/other_user}",
"gists_url": "https://api.github.com/users/MK096/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MK096/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MK096/subscriptions",
"organizations_url": "https://api.github.com/users/MK096/orgs",
"repos_url": "https://api.github.com/users/MK096/repos",
"events_url": "https://api.github.com/users/MK096/events{/privacy}",
"received_events_url": "https://api.github.com/users/MK096/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there, `mbart-50` is actlly a big model and takes a while to load. But 15-20 min seems a lot, it's could probably an issue with your system. You could try to load it using a colab and see how much time it takes.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,622 | 1,625 | 1,625 | NONE | null | Whenever I try to run:
model = MBartForConditionalGeneration.from_pretrained("[local path]/mbart-large-50-one-to-many-mmt")
My computer either freezes or it takes 15-20 minutes to load the model.
I am using it for translation.
Code: https://huggingface.co/facebook/mbart-large-50-one-to-many-mmt
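For reference, a minimal way to time the load (a sketch — it uses the hub id from the card above; substitute the local path as needed):

```python
import time
from transformers import MBartForConditionalGeneration

start = time.time()
# Loading from a local directory skips the download, so any remaining time
# is checkpoint deserialization; a slow disk or swapping can dominate here.
model = MBartForConditionalGeneration.from_pretrained(
    "facebook/mbart-large-50-one-to-many-mmt"
)
print(f"Loaded in {time.time() - start:.1f}s")
```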
Any solution for this?
-Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11946/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11945 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11945/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11945/comments | https://api.github.com/repos/huggingface/transformers/issues/11945/events | https://github.com/huggingface/transformers/pull/11945 | 906,672,684 | MDExOlB1bGxSZXF1ZXN0NjU3NjMwMTg2 | 11,945 | reinitialize wandb config for each hyperparameter search run | {
"login": "Mindful",
"id": 2897172,
"node_id": "MDQ6VXNlcjI4OTcxNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2897172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mindful",
"html_url": "https://github.com/Mindful",
"followers_url": "https://api.github.com/users/Mindful/followers",
"following_url": "https://api.github.com/users/Mindful/following{/other_user}",
"gists_url": "https://api.github.com/users/Mindful/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mindful/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mindful/subscriptions",
"organizations_url": "https://api.github.com/users/Mindful/orgs",
"repos_url": "https://api.github.com/users/Mindful/repos",
"events_url": "https://api.github.com/users/Mindful/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mindful/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I usually do the search directly with wandb sweeps so didn't notice this issue.\r\nLooks good on my side, thanks!"
] | 1,622 | 1,622 | 1,622 | CONTRIBUTOR | null | # What does this PR do?
Fixes #11944
This is the quick/easy fix I'm using to work around the issue locally: it simply reruns the `WandbCallback` integration's `setup()` method for each run. This works fine for me, but if for some reason it's not safe or desirable to rerun `WandbCallback.setup()`, please feel free to close this PR.
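For reference, a minimal sketch of the idea (illustrative only — the actual change is in this PR's diff; the hook signature and `_initialized` flag come from the existing integration):

```python
from transformers.integrations import WandbCallback

class ReinitWandbCallback(WandbCallback):
    def on_train_begin(self, args, state, control, model=None, **kwargs):
        # During a hyperparameter search, force setup() to run again so each
        # trial starts a fresh wandb run with its own trial config.
        if state.is_hyper_param_search:
            self._initialized = False
        super().on_train_begin(args, state, control, model=model, **kwargs)
```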
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Unsure; probably whoever did the wandb integration is best placed. Otherwise maybe @sgugger, since it's Trainer-related?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11945/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11945",
"html_url": "https://github.com/huggingface/transformers/pull/11945",
"diff_url": "https://github.com/huggingface/transformers/pull/11945.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11945.patch",
"merged_at": 1622553513000
} |