url (string, 62-66 chars) | repository_url (string, 1 class) | labels_url (string, 76-80 chars) | comments_url (string, 71-75 chars) | events_url (string, 69-73 chars) | html_url (string, 50-56 chars) | id (int64, 377M-2.15B) | node_id (string, 18-32 chars) | number (int64, 1-29.2k) | title (string, 1-487 chars) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64, 1.54k-1.71k) | updated_at (int64, 1.54k-1.71k) | closed_at (int64, 1.54k-1.71k, nullable) | author_association (string, 4 classes) | active_lock_reason (string, 2 classes) | body (string, 0-234k chars, nullable) | reactions (dict) | timeline_url (string, 71-75 chars) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/9730 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9730/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9730/comments | https://api.github.com/repos/huggingface/transformers/issues/9730/events | https://github.com/huggingface/transformers/issues/9730 | 791,108,250 | MDU6SXNzdWU3OTExMDgyNTA= | 9,730 | Docs suggest to use discriminator weights for ElectraForMaskedLM instead of generator | {
"login": "jbingel",
"id": 10550688,
"node_id": "MDQ6VXNlcjEwNTUwNjg4",
"avatar_url": "https://avatars.githubusercontent.com/u/10550688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbingel",
"html_url": "https://github.com/jbingel",
"followers_url": "https://api.github.com/users/jbingel/followers",
"following_url": "https://api.github.com/users/jbingel/following{/other_user}",
"gists_url": "https://api.github.com/users/jbingel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jbingel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbingel/subscriptions",
"organizations_url": "https://api.github.com/users/jbingel/orgs",
"repos_url": "https://api.github.com/users/jbingel/repos",
"events_url": "https://api.github.com/users/jbingel/events{/privacy}",
"received_events_url": "https://api.github.com/users/jbingel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello! Indeed, this is a mistake! Do you want to update the docs to show the generator instead?",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,611 | 1,614 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-5.3.0-64-generic-x86_64-with-Ubuntu-19.10-eoan
- Python version: 3.7.5
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): ELECTRA
The problem arises when using:
* [x] the official example scripts: https://huggingface.co/transformers/model_doc/electra.html#codecell3
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: I want to check whether a word fits into the context of a sentence, for which I use the prediction probability of that word at the MASK position
## To reproduce
Steps to reproduce the behavior:
1. `from transformers import ElectraForMaskedLM`
2. `model = ElectraForMaskedLM.from_pretrained('google/electra-small-discriminator')`
## My issue
I get the following warning:
`Some weights of ElectraForMaskedLM were not initialized from the model checkpoint at google/electra-small-discriminator and are newly initialized: ['generator_predictions.LayerNorm.weight', 'generator_predictions.LayerNorm.bias', 'generator_predictions.dense.weight', 'generator_predictions.dense.bias', 'generator_lm_head.weight', 'generator_lm_head.bias']`
I understand that I'm loading the discriminator weights, whereas ElectraForMaskedLM needs the generator weights for the MLM output. Why do the docs tell me to use the discriminator? Am I missing something?
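For reference, a minimal sketch of the pairing I would expect instead (my own example, not from the docs): the generator checkpoint matches `ElectraForMaskedLM`, while the discriminator checkpoint pairs with `ElectraForPreTraining`.
```python
import torch
from transformers import ElectraForMaskedLM, ElectraTokenizerFast

# Sketch: load the *generator* weights for masked-LM scoring; the discriminator
# checkpoint does not contain the generator_predictions / generator_lm_head weights.
tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-generator")
model = ElectraForMaskedLM.from_pretrained("google/electra-small-generator")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_index].argmax(-1)))
```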
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9730/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9729 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9729/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9729/comments | https://api.github.com/repos/huggingface/transformers/issues/9729/events | https://github.com/huggingface/transformers/pull/9729 | 791,065,551 | MDExOlB1bGxSZXF1ZXN0NTU5MTgxMDcy | 9,729 | Changing model default for TableQuestionAnsweringPipeline. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
Niels removed his tapas model from the Hub, so we need to update the default to the `google` organization.
- Discussion: https://discuss.huggingface.co/t/table-question-answering-is-not-an-available-task-under-pipeline/3284/6
- Had to update the slow test that was out of sync, I think. @LysandreJik, can you confirm?
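For context, a minimal usage sketch (my own example, assuming the `google/tapas-base-finetuned-wtq` checkpoint; pinning the model explicitly avoids depending on the pipeline default):
```python
from transformers import pipeline

# Sketch: requires pandas (and torch-scatter for TAPAS); pin the checkpoint explicitly.
tqa = pipeline("table-question-answering", model="google/tapas-base-finetuned-wtq")

table = {
    "Repository": ["transformers", "datasets", "tokenizers"],
    "Stars": ["36542", "4512", "3934"],
}
print(tqa(table=table, query="How many stars does the transformers repository have?"))
```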
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@LysandreJik
@thomwolf
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9729/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9729",
"html_url": "https://github.com/huggingface/transformers/pull/9729",
"diff_url": "https://github.com/huggingface/transformers/pull/9729.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9729.patch",
"merged_at": 1611235912000
} |
https://api.github.com/repos/huggingface/transformers/issues/9728 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9728/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9728/comments | https://api.github.com/repos/huggingface/transformers/issues/9728/events | https://github.com/huggingface/transformers/pull/9728 | 791,010,477 | MDExOlB1bGxSZXF1ZXN0NTU5MTM3MDU4 | 9,728 | Fix some TF slow tests | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes several slow tests related to saved model creation.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9728/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9728",
"html_url": "https://github.com/huggingface/transformers/pull/9728",
"diff_url": "https://github.com/huggingface/transformers/pull/9728.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9728.patch",
"merged_at": 1611323446000
} |
https://api.github.com/repos/huggingface/transformers/issues/9727 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9727/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9727/comments | https://api.github.com/repos/huggingface/transformers/issues/9727/events | https://github.com/huggingface/transformers/issues/9727 | 790,968,872 | MDU6SXNzdWU3OTA5Njg4NzI= | 9,727 | ERROR about using layer_past and use_cache in Attention Layer of GPT2 | {
"login": "ouwenjie03",
"id": 5829193,
"node_id": "MDQ6VXNlcjU4MjkxOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5829193?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ouwenjie03",
"html_url": "https://github.com/ouwenjie03",
"followers_url": "https://api.github.com/users/ouwenjie03/followers",
"following_url": "https://api.github.com/users/ouwenjie03/following{/other_user}",
"gists_url": "https://api.github.com/users/ouwenjie03/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ouwenjie03/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ouwenjie03/subscriptions",
"organizations_url": "https://api.github.com/users/ouwenjie03/orgs",
"repos_url": "https://api.github.com/users/ouwenjie03/repos",
"events_url": "https://api.github.com/users/ouwenjie03/events{/privacy}",
"received_events_url": "https://api.github.com/users/ouwenjie03/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @ouwenjie03,\r\n\r\nyou should not do this steps:\r\n\r\n```python\r\ninput_ids = torch.cat([input_ids, added_input_ids], dim=1)\r\n```\r\n\r\nIt should just be \r\n\r\n```pyhton \r\ninput_ids = added_input_ids\r\n```\r\n\r\nWhen passing `past_key_values` the input_ids should correspond **only** to the last tokens. I think if you take a look at this test: https://github.com/huggingface/transformers/blob/3f290e6c8403c6a2cf80dce068869793bde49540/tests/test_modeling_gpt2.py#L446 you'll understand a bit better.\r\n",
"Oh, get it! Thank you so much!"
] | 1,611 | 1,611 | 1,611 | NONE | null | Hi,
I am trying to use "use_cache" and "past_key_values" to speed up the decoding steps.
But I have some questions about the Attention layer; here is the relevant code from its forward function:
[layer_past code](https://github.com/huggingface/transformers/blob/master/src/transformers/models/gpt2/modeling_gpt2.py#L230)
``` python
query = self.split_heads(query)
key = self.split_heads(key, k=True)
value = self.split_heads(value)
if layer_past is not None:
    past_key, past_value = layer_past[0].transpose(-2, -1), layer_past[1]  # transpose back cf below
    key = torch.cat((past_key, key), dim=-1)
    value = torch.cat((past_value, value), dim=-2)
```
If I send the layer_past value, it raises a size-mismatch ERROR.
It shows that the shapes of "key" and "value" do not match the attention mask.
Maybe the "key" before torch.cat has the same shape as the attention mask, but after torch.cat with past_key, the shape of "key" changes.
Here is an example:
``` python
import torch
from transformers import GPT2Model, GPT2Config
config = GPT2Config()
config.use_cache = True
model = GPT2Model(config=config)
input_ids = torch.randint(0, 100, (2, 6))
attention_mask = torch.tensor([[1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1]], dtype=torch.bool)
past_key_values = None
outputs = model(input_ids=input_ids, attention_mask=attention_mask, past_key_values=None)
logits = outputs[0]
past_key_values = outputs[1]
print(logits.size())
print(len(past_key_values))
print([_kv.size() for _kv in past_key_values])
# we get the past_key_values and add the next step decoder input.
added_input_ids = torch.randint(0, 100, (2, 1))
added_attention_mask = torch.tensor([[1], [1]], dtype=torch.bool)
input_ids = torch.cat([input_ids, added_input_ids], dim=1)
attention_mask = torch.cat([attention_mask, added_attention_mask], dim=1)
print(input_ids.size(), attention_mask.size())
outputs = model(input_ids=input_ids, attention_mask=attention_mask, past_key_values=past_key_values)
# the ERROR occurs here
logits = outputs[0]
past_key_values = outputs[1]
print(logits.size())
print(len(past_key_values))
print([_kv.size() for _kv in past_key_values])
```
```
/usr/local/lib/python3.6/dist-packages/transformers/models/gpt2/modeling_gpt2.py in _attn(self, q, k, v, attention_mask, head_mask, output_attentions)
175 if attention_mask is not None:
176 # Apply the attention mask
--> 177 w = w + attention_mask
178
179 w = nn.Softmax(dim=-1)(w)
RuntimeError: The size of tensor a (13) must match the size of tensor b (7) at non-singleton dimension 3
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9727/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9726 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9726/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9726/comments | https://api.github.com/repos/huggingface/transformers/issues/9726/events | https://github.com/huggingface/transformers/pull/9726 | 790,937,321 | MDExOlB1bGxSZXF1ZXN0NTU5MDcyNTcz | 9,726 | fix T5 head mask in model_parallel | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"That's a better solution actually! Thanks @patil-suraj ",
"Also, there is a whole bunch of issues including this one I believe fixed in this PR: https://github.com/huggingface/transformers/pull/9323 where we no longer do it one by one."
] | 1,611 | 1,611 | 1,611 | MEMBER | null | # What does this PR do?
`head_mask` in T5 is not parallelized correctly in model parallel; each layer's head mask should be moved to that layer's device if it is not `None`.
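A minimal sketch of the general idea, with a hypothetical helper name (not the exact code in this PR):
```python
import torch

def place_head_mask(head_mask, layer_devices):
    # Hypothetical helper: move each layer's head mask, if present, onto the
    # device that the corresponding layer has been parallelized to.
    return [
        None if mask is None else mask.to(device)
        for mask, device in zip(head_mask, layer_devices)
    ]

# Example with two layers (both on CPU here; GPUs in a real model-parallel setup).
head_mask = [torch.ones(8), None]
layer_devices = [torch.device("cpu"), torch.device("cpu")]
print(place_head_mask(head_mask, layer_devices))
```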
Fixes #9718 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9726/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9726/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9726",
"html_url": "https://github.com/huggingface/transformers/pull/9726",
"diff_url": "https://github.com/huggingface/transformers/pull/9726.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9726.patch",
"merged_at": 1611227775000
} |
https://api.github.com/repos/huggingface/transformers/issues/9725 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9725/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9725/comments | https://api.github.com/repos/huggingface/transformers/issues/9725/events | https://github.com/huggingface/transformers/issues/9725 | 790,928,729 | MDU6SXNzdWU3OTA5Mjg3Mjk= | 9,725 | AutoModel doesn't work with DPRContextEncoder | {
"login": "antoniolanza1996",
"id": 40452030,
"node_id": "MDQ6VXNlcjQwNDUyMDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/40452030?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antoniolanza1996",
"html_url": "https://github.com/antoniolanza1996",
"followers_url": "https://api.github.com/users/antoniolanza1996/followers",
"following_url": "https://api.github.com/users/antoniolanza1996/following{/other_user}",
"gists_url": "https://api.github.com/users/antoniolanza1996/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antoniolanza1996/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antoniolanza1996/subscriptions",
"organizations_url": "https://api.github.com/users/antoniolanza1996/orgs",
"repos_url": "https://api.github.com/users/antoniolanza1996/repos",
"events_url": "https://api.github.com/users/antoniolanza1996/events{/privacy}",
"received_events_url": "https://api.github.com/users/antoniolanza1996/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"I ran into this recently and posted a question about it [on the forum](https://discuss.huggingface.co/t/dpr-pretrained-context-encoder-unused-weight-warning/11265), awaiting response.\r\n\r\nI don't have a solution but, as a workaround, if you use `DPRContextEncoder` instead of `AutoModel`,\r\n```\r\nfrom transformers import DPRContextEncoder\r\ncontext_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n```\r\nyou don't get the runtime warning."
] | 1,611 | 1,635 | 1,614 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.2.2
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@lhoestq
@patrickvonplaten
## Information
If I run:
```python
model = AutoModel.from_pretrained('facebook/dpr-ctx_encoder-single-nq-base')
```
AutoModel infers `DPRQuestionEncoder` but not the correct class (i.e. `DPRContextEncoder`). Thus we can't use the correct model weights.
The output is:
```
Some weights of the model checkpoint at facebook/dpr-ctx_encoder-single-nq-base were not used when initializing DPRQuestionEncoder: ['ctx_encoder.bert_model.embeddings.word_embeddings.weight', 'ctx_encoder.bert_model.embeddings.position_embeddings.weight', 'ctx_encoder.bert_model.embeddings.token_type_embeddings.weight', 'ctx_encoder.bert_model.embeddings.LayerNorm.weight', 'ctx_encoder.bert_model.embeddings.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.0.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.0.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.0.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.0.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.0.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.0.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.0.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.0.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.0.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.0.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.0.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.0.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.0.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.0.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.0.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.0.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.1.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.1.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.1.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.1.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.1.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.1.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.1.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.1.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.1.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.1.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.1.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.1.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.1.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.1.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.1.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.1.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.2.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.2.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.2.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.2.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.2.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.2.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.2.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.2.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.2.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.2.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.2.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.2.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.2.output.dense.weight', 
'ctx_encoder.bert_model.encoder.layer.2.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.2.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.2.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.3.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.3.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.3.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.3.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.3.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.3.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.3.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.3.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.3.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.3.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.3.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.3.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.3.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.3.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.3.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.3.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.4.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.4.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.4.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.4.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.4.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.4.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.4.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.4.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.4.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.4.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.4.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.4.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.4.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.4.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.4.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.4.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.5.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.5.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.5.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.5.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.5.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.5.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.5.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.5.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.5.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.5.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.5.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.5.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.5.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.5.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.5.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.5.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.6.attention.self.query.weight', 
'ctx_encoder.bert_model.encoder.layer.6.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.6.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.6.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.6.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.6.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.6.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.6.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.6.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.6.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.6.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.6.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.6.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.6.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.6.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.6.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.7.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.7.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.7.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.7.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.7.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.7.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.7.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.7.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.7.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.7.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.7.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.7.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.7.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.7.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.7.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.7.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.8.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.8.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.8.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.8.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.8.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.8.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.8.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.8.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.8.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.8.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.8.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.8.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.8.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.8.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.8.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.8.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.9.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.9.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.9.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.9.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.9.attention.self.value.weight', 
'ctx_encoder.bert_model.encoder.layer.9.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.9.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.9.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.9.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.9.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.9.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.9.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.9.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.9.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.9.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.9.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.10.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.10.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.10.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.10.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.10.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.10.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.10.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.10.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.10.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.10.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.10.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.10.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.10.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.10.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.10.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.10.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.11.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.11.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.11.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.11.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.11.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.11.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.11.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.11.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.11.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.11.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.11.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.11.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.11.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.11.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.11.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.11.output.LayerNorm.bias', 'ctx_encoder.bert_model.pooler.dense.weight', 'ctx_encoder.bert_model.pooler.dense.bias']
- This IS expected if you are initializing DPRQuestionEncoder from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DPRQuestionEncoder from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of DPRQuestionEncoder were not initialized from the model checkpoint at facebook/dpr-ctx_encoder-single-nq-base and are newly initialized: ['bert_model.embeddings.word_embeddings.weight', 'bert_model.embeddings.position_embeddings.weight', 'bert_model.embeddings.token_type_embeddings.weight', 'bert_model.embeddings.LayerNorm.weight', 'bert_model.embeddings.LayerNorm.bias', 'bert_model.encoder.layer.0.attention.self.query.weight', 'bert_model.encoder.layer.0.attention.self.query.bias', 'bert_model.encoder.layer.0.attention.self.key.weight', 'bert_model.encoder.layer.0.attention.self.key.bias', 'bert_model.encoder.layer.0.attention.self.value.weight', 'bert_model.encoder.layer.0.attention.self.value.bias', 'bert_model.encoder.layer.0.attention.output.dense.weight', 'bert_model.encoder.layer.0.attention.output.dense.bias', 'bert_model.encoder.layer.0.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.0.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.0.intermediate.dense.weight', 'bert_model.encoder.layer.0.intermediate.dense.bias', 'bert_model.encoder.layer.0.output.dense.weight', 'bert_model.encoder.layer.0.output.dense.bias', 'bert_model.encoder.layer.0.output.LayerNorm.weight', 'bert_model.encoder.layer.0.output.LayerNorm.bias', 'bert_model.encoder.layer.1.attention.self.query.weight', 'bert_model.encoder.layer.1.attention.self.query.bias', 'bert_model.encoder.layer.1.attention.self.key.weight', 'bert_model.encoder.layer.1.attention.self.key.bias', 'bert_model.encoder.layer.1.attention.self.value.weight', 'bert_model.encoder.layer.1.attention.self.value.bias', 'bert_model.encoder.layer.1.attention.output.dense.weight', 'bert_model.encoder.layer.1.attention.output.dense.bias', 'bert_model.encoder.layer.1.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.1.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.1.intermediate.dense.weight', 'bert_model.encoder.layer.1.intermediate.dense.bias', 'bert_model.encoder.layer.1.output.dense.weight', 'bert_model.encoder.layer.1.output.dense.bias', 'bert_model.encoder.layer.1.output.LayerNorm.weight', 'bert_model.encoder.layer.1.output.LayerNorm.bias', 'bert_model.encoder.layer.2.attention.self.query.weight', 'bert_model.encoder.layer.2.attention.self.query.bias', 'bert_model.encoder.layer.2.attention.self.key.weight', 'bert_model.encoder.layer.2.attention.self.key.bias', 'bert_model.encoder.layer.2.attention.self.value.weight', 'bert_model.encoder.layer.2.attention.self.value.bias', 'bert_model.encoder.layer.2.attention.output.dense.weight', 'bert_model.encoder.layer.2.attention.output.dense.bias', 'bert_model.encoder.layer.2.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.2.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.2.intermediate.dense.weight', 'bert_model.encoder.layer.2.intermediate.dense.bias', 'bert_model.encoder.layer.2.output.dense.weight', 'bert_model.encoder.layer.2.output.dense.bias', 'bert_model.encoder.layer.2.output.LayerNorm.weight', 'bert_model.encoder.layer.2.output.LayerNorm.bias', 'bert_model.encoder.layer.3.attention.self.query.weight', 'bert_model.encoder.layer.3.attention.self.query.bias', 'bert_model.encoder.layer.3.attention.self.key.weight', 'bert_model.encoder.layer.3.attention.self.key.bias', 'bert_model.encoder.layer.3.attention.self.value.weight', 'bert_model.encoder.layer.3.attention.self.value.bias', 'bert_model.encoder.layer.3.attention.output.dense.weight', 'bert_model.encoder.layer.3.attention.output.dense.bias', 
'bert_model.encoder.layer.3.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.3.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.3.intermediate.dense.weight', 'bert_model.encoder.layer.3.intermediate.dense.bias', 'bert_model.encoder.layer.3.output.dense.weight', 'bert_model.encoder.layer.3.output.dense.bias', 'bert_model.encoder.layer.3.output.LayerNorm.weight', 'bert_model.encoder.layer.3.output.LayerNorm.bias', 'bert_model.encoder.layer.4.attention.self.query.weight', 'bert_model.encoder.layer.4.attention.self.query.bias', 'bert_model.encoder.layer.4.attention.self.key.weight', 'bert_model.encoder.layer.4.attention.self.key.bias', 'bert_model.encoder.layer.4.attention.self.value.weight', 'bert_model.encoder.layer.4.attention.self.value.bias', 'bert_model.encoder.layer.4.attention.output.dense.weight', 'bert_model.encoder.layer.4.attention.output.dense.bias', 'bert_model.encoder.layer.4.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.4.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.4.intermediate.dense.weight', 'bert_model.encoder.layer.4.intermediate.dense.bias', 'bert_model.encoder.layer.4.output.dense.weight', 'bert_model.encoder.layer.4.output.dense.bias', 'bert_model.encoder.layer.4.output.LayerNorm.weight', 'bert_model.encoder.layer.4.output.LayerNorm.bias', 'bert_model.encoder.layer.5.attention.self.query.weight', 'bert_model.encoder.layer.5.attention.self.query.bias', 'bert_model.encoder.layer.5.attention.self.key.weight', 'bert_model.encoder.layer.5.attention.self.key.bias', 'bert_model.encoder.layer.5.attention.self.value.weight', 'bert_model.encoder.layer.5.attention.self.value.bias', 'bert_model.encoder.layer.5.attention.output.dense.weight', 'bert_model.encoder.layer.5.attention.output.dense.bias', 'bert_model.encoder.layer.5.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.5.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.5.intermediate.dense.weight', 'bert_model.encoder.layer.5.intermediate.dense.bias', 'bert_model.encoder.layer.5.output.dense.weight', 'bert_model.encoder.layer.5.output.dense.bias', 'bert_model.encoder.layer.5.output.LayerNorm.weight', 'bert_model.encoder.layer.5.output.LayerNorm.bias', 'bert_model.encoder.layer.6.attention.self.query.weight', 'bert_model.encoder.layer.6.attention.self.query.bias', 'bert_model.encoder.layer.6.attention.self.key.weight', 'bert_model.encoder.layer.6.attention.self.key.bias', 'bert_model.encoder.layer.6.attention.self.value.weight', 'bert_model.encoder.layer.6.attention.self.value.bias', 'bert_model.encoder.layer.6.attention.output.dense.weight', 'bert_model.encoder.layer.6.attention.output.dense.bias', 'bert_model.encoder.layer.6.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.6.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.6.intermediate.dense.weight', 'bert_model.encoder.layer.6.intermediate.dense.bias', 'bert_model.encoder.layer.6.output.dense.weight', 'bert_model.encoder.layer.6.output.dense.bias', 'bert_model.encoder.layer.6.output.LayerNorm.weight', 'bert_model.encoder.layer.6.output.LayerNorm.bias', 'bert_model.encoder.layer.7.attention.self.query.weight', 'bert_model.encoder.layer.7.attention.self.query.bias', 'bert_model.encoder.layer.7.attention.self.key.weight', 'bert_model.encoder.layer.7.attention.self.key.bias', 'bert_model.encoder.layer.7.attention.self.value.weight', 'bert_model.encoder.layer.7.attention.self.value.bias', 'bert_model.encoder.layer.7.attention.output.dense.weight', 
'bert_model.encoder.layer.7.attention.output.dense.bias', 'bert_model.encoder.layer.7.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.7.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.7.intermediate.dense.weight', 'bert_model.encoder.layer.7.intermediate.dense.bias', 'bert_model.encoder.layer.7.output.dense.weight', 'bert_model.encoder.layer.7.output.dense.bias', 'bert_model.encoder.layer.7.output.LayerNorm.weight', 'bert_model.encoder.layer.7.output.LayerNorm.bias', 'bert_model.encoder.layer.8.attention.self.query.weight', 'bert_model.encoder.layer.8.attention.self.query.bias', 'bert_model.encoder.layer.8.attention.self.key.weight', 'bert_model.encoder.layer.8.attention.self.key.bias', 'bert_model.encoder.layer.8.attention.self.value.weight', 'bert_model.encoder.layer.8.attention.self.value.bias', 'bert_model.encoder.layer.8.attention.output.dense.weight', 'bert_model.encoder.layer.8.attention.output.dense.bias', 'bert_model.encoder.layer.8.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.8.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.8.intermediate.dense.weight', 'bert_model.encoder.layer.8.intermediate.dense.bias', 'bert_model.encoder.layer.8.output.dense.weight', 'bert_model.encoder.layer.8.output.dense.bias', 'bert_model.encoder.layer.8.output.LayerNorm.weight', 'bert_model.encoder.layer.8.output.LayerNorm.bias', 'bert_model.encoder.layer.9.attention.self.query.weight', 'bert_model.encoder.layer.9.attention.self.query.bias', 'bert_model.encoder.layer.9.attention.self.key.weight', 'bert_model.encoder.layer.9.attention.self.key.bias', 'bert_model.encoder.layer.9.attention.self.value.weight', 'bert_model.encoder.layer.9.attention.self.value.bias', 'bert_model.encoder.layer.9.attention.output.dense.weight', 'bert_model.encoder.layer.9.attention.output.dense.bias', 'bert_model.encoder.layer.9.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.9.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.9.intermediate.dense.weight', 'bert_model.encoder.layer.9.intermediate.dense.bias', 'bert_model.encoder.layer.9.output.dense.weight', 'bert_model.encoder.layer.9.output.dense.bias', 'bert_model.encoder.layer.9.output.LayerNorm.weight', 'bert_model.encoder.layer.9.output.LayerNorm.bias', 'bert_model.encoder.layer.10.attention.self.query.weight', 'bert_model.encoder.layer.10.attention.self.query.bias', 'bert_model.encoder.layer.10.attention.self.key.weight', 'bert_model.encoder.layer.10.attention.self.key.bias', 'bert_model.encoder.layer.10.attention.self.value.weight', 'bert_model.encoder.layer.10.attention.self.value.bias', 'bert_model.encoder.layer.10.attention.output.dense.weight', 'bert_model.encoder.layer.10.attention.output.dense.bias', 'bert_model.encoder.layer.10.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.10.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.10.intermediate.dense.weight', 'bert_model.encoder.layer.10.intermediate.dense.bias', 'bert_model.encoder.layer.10.output.dense.weight', 'bert_model.encoder.layer.10.output.dense.bias', 'bert_model.encoder.layer.10.output.LayerNorm.weight', 'bert_model.encoder.layer.10.output.LayerNorm.bias', 'bert_model.encoder.layer.11.attention.self.query.weight', 'bert_model.encoder.layer.11.attention.self.query.bias', 'bert_model.encoder.layer.11.attention.self.key.weight', 'bert_model.encoder.layer.11.attention.self.key.bias', 'bert_model.encoder.layer.11.attention.self.value.weight', 'bert_model.encoder.layer.11.attention.self.value.bias', 
'bert_model.encoder.layer.11.attention.output.dense.weight', 'bert_model.encoder.layer.11.attention.output.dense.bias', 'bert_model.encoder.layer.11.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.11.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.11.intermediate.dense.weight', 'bert_model.encoder.layer.11.intermediate.dense.bias', 'bert_model.encoder.layer.11.output.dense.weight', 'bert_model.encoder.layer.11.output.dense.bias', 'bert_model.encoder.layer.11.output.LayerNorm.weight', 'bert_model.encoder.layer.11.output.LayerNorm.bias', 'bert_model.pooler.dense.weight', 'bert_model.pooler.dense.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
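Side note (my own workaround, not part of the original report): loading the checkpoint with the dedicated class avoids the mismatch entirely. A minimal sketch:
```python
from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast

# Sketch: use the class that matches the checkpoint instead of relying on AutoModel.
tokenizer = DPRContextEncoderTokenizerFast.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
model = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")

inputs = tokenizer("Paris is the capital of France.", return_tensors="pt")
embedding = model(**inputs).pooler_output  # (1, 768) passage embedding
print(embedding.shape)
```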
I think it could be useful to generalise this behaviour and automatically detect whether the model is a context/document encoder or a question encoder. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9725/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9724 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9724/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9724/comments | https://api.github.com/repos/huggingface/transformers/issues/9724/events | https://github.com/huggingface/transformers/issues/9724 | 790,900,093 | MDU6SXNzdWU3OTA5MDAwOTM= | 9,724 | Run_ner.py falsely aligns prediction list | {
"login": "Stimmot",
"id": 29411999,
"node_id": "MDQ6VXNlcjI5NDExOTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/29411999?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Stimmot",
"html_url": "https://github.com/Stimmot",
"followers_url": "https://api.github.com/users/Stimmot/followers",
"following_url": "https://api.github.com/users/Stimmot/following{/other_user}",
"gists_url": "https://api.github.com/users/Stimmot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Stimmot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Stimmot/subscriptions",
"organizations_url": "https://api.github.com/users/Stimmot/orgs",
"repos_url": "https://api.github.com/users/Stimmot/repos",
"events_url": "https://api.github.com/users/Stimmot/events{/privacy}",
"received_events_url": "https://api.github.com/users/Stimmot/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You can check https://github.com/uf-hobi-informatics-lab/ClinicalTransformerNER\r\nWe essentially wrap the models in transformers with our NER implementation that can handle sentences longer than the max_len.",
"Hi @Stimmot ,\r\n\r\ndid you run the `scripts/preprocess.py` to make sure that there are no sentences > 300 subtokens in your final data splits :thinking: \r\n\r\nThis should heavily prevent these kind of \"maximum sequence length execeeded\" errors :)",
"Thank you for the response @stefan-it, but I don't think it actually has to do with the sequence lengths. The longest sentences in the documents are around 200 tokens, not even near the maximal length.\r\nBesides, the script crashes at some point with the following error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/tmp/pycharm_project_44/examples/token-classification/run_ner_crossval.py\", line 226, in <module>\r\n main(sys.argv[1])\r\n File \"/tmp/pycharm_project_44/examples/token-classification/run_ner_crossval.py\", line 153, in main\r\n result_string, pred_dict = run_ner.main(json_config)\r\n File \"/tmp/pycharm_project_44/examples/token-classification/run_ner.py\", line 359, in main\r\n token_classification_task.write_predictions_to_file(writer, f, preds_list)\r\n File \"/tmp/pycharm_project_44/examples/token-classification/tasks.py\", line 53, in write_predictions_to_file\r\n elif preds_list[example_id]:\r\nIndexError: list index out of range\r\n100%|█████████████████████████████████████████████| 1/1 [00:00<00:00, 1.70it/s]\r\n```\r\nThe script doesn't work through the predictions list as it normally would, since the indices are shifted. Any other ideas what this could be?\r\n\r\n(Thank you @bugface as well but as said above I don't think it's because of the sequence length)",
"@stefan-it I now also used the preprocess.py script just to be sure, but unfortunately it didn't change anything.",
"Hi @Stimmot ,\r\n\r\nI think I found an interesting information in your provided log: `cached_test_BertTokenizer_340.lock`\r\n\r\nThat means, that the dataset was initially pre-processed with a sequence length of 340! Then I think you changed the max. sequence length to 300, but your ner script is still using the cached pre-processed test dataset that has a max. sequence length of 340.\r\n\r\nCould you try to remove all `cached*` files, so that the dataset features are newly written :thinking: Hope this helps :)",
"Thanks @stefan-it, it had indeed to do with the cached models.\r\nI built a script that one by one takes test documents and runs the run_ner.py script on them, however, it seems that it saves a cached version of the model for the first document (which it predicts correctly) and then uses this cached version for all subsequent ones. The cached dimensions of the document don't work on the new ones so, naturally, it runs into errors.\r\n\r\nThe solution is to let the model be rebuilt after each run, without using cached verisons.\r\n\r\nAnother quick question on that: is there an option to tell the run_ner.py script not to build cached models, so that it will use new ones each time it predicts?",
"Yeah, this can be done via `--overwrite_cache` argument :hugs: ",
"Thank you very much, worked perfectly!",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,611 | 1,614 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 3.5.1
- Platform: Linux-5.9.1-kd-cluster-x86_64-with-glibc2.10
- Python version: 3.8.0
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
@stefan-it
## Information
I am using the bert-base-german-cased model, which I trained on an NER task to predict entity labels.
When I run prediction on the test dataset, BERT reports that the maximum sequence length (300) has been exceeded and that no predictions will be made for a number of items.
However, this appears to be a misleading error report. The problem seems to be that in the align_predictions function, the preds_list variable ends up with the wrong dimensions: the predictions themselves are correct, but the index is shifted, so they point to the wrong words.
An example of what I mean:
```
Amtsgericht Ort
Leipzig Ort
Abteilung O
für O
Strafsachen O
```
becomes
```
Amtsgericht O
Leipzig O
Abteilung O
für Ort
Strafsachen Ort
```
in the preds_list.
The write_predictions_to_file function then gets tangled up by this shifted index and (I think falsely) declares a sequence length error.
Strangely, another test document works just fine, without any difference between them that I could find. No sequence length error and no false indices there.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I'm afraid it isn't so easy to reproduce because it requires the trained model and the data.
The align_predictions function:
```
from typing import List, Tuple

import numpy as np
from torch import nn

# label_map (label id -> label string) is defined earlier in run_ner.py.
def align_predictions(predictions: np.ndarray, label_ids: np.ndarray) -> Tuple[List[List[str]], List[List[str]]]:
    preds = np.argmax(predictions, axis=2)
    batch_size, seq_len = preds.shape
    out_label_list = [[] for _ in range(batch_size)]
    preds_list = [[] for _ in range(batch_size)]
    for i in range(batch_size):
        for j in range(seq_len):
            # Positions carrying the ignore index were never labelled and are skipped.
            if label_ids[i, j] != nn.CrossEntropyLoss().ignore_index:
                out_label_list[i].append(label_map[label_ids[i][j]])
                preds_list[i].append(label_map[preds[i][j]])
    return preds_list, out_label_list
```
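To make the shift visible, a minimal sanity check (hypothetical helper code, not part of run_ner.py; `test_sentences` is assumed to hold the whitespace-tokenized test sentences, and `predictions`/`label_ids` are the arrays passed to align_predictions) would be to compare the number of predicted labels per sentence with the number of words:
```
# Hypothetical check: each sentence should receive exactly one label per word.
preds_list, out_label_list = align_predictions(predictions, label_ids)
for idx, (sentence, preds) in enumerate(zip(test_sentences, preds_list)):
    if len(preds) != len(sentence):
        print(f"Sentence {idx}: expected {len(sentence)} labels, got {len(preds)}")
```
If the counts disagree, write_predictions_to_file runs out of predictions and reports the sequence-length warning even though no sentence is actually too long.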
Console output:
```
01/21/2021 11:07:01 - INFO - filelock - Lock 140422091422400 released on /home/IAIS/tschmude/bert_remote/examples/token-classification/Data_processing_scripts/CrossVal_Files/Rotation/Test_file_swap/cached_test_BertTokenizer_340.lock
/home/IAIS/tschmude/anaconda3/envs/bert_env_remote/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:64: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
0%| | 0/1 [00:00<?, ?it/s]/home/IAIS/tschmude/anaconda3/envs/bert_env_remote/lib/python3.8/site-packages/seqeval/metrics/v1.py:57: UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
01/21/2021 11:07:30 - INFO - run_ner -
precision recall f1-score support
Datum 1.00 1.00 1.00 1
Gestaendnis_ja 1.00 1.00 1.00 1
Ort 0.50 1.00 0.67 1
Schadensbetrag 1.00 1.00 1.00 1
Strafe_Gesamtfreiheitsstrafe_Dauer 1.00 1.00 1.00 1
Strafe_Tatbestand 0.00 0.00 0.00 0
Taeter_Drogenbezug_ja 1.00 1.00 1.00 1
micro avg 0.55 1.00 0.71 6
macro avg 0.79 0.86 0.81 6
weighted avg 0.92 1.00 0.94 6
/tmp/pycharm_project_44/src/transformers/trainer.py:1174: FutureWarning: This method is deprecated, use `Trainer.is_world_process_zero()` instead.
warnings.warn("This method is deprecated, use `Trainer.is_world_process_zero()` instead.", FutureWarning)
01/21/2021 11:08:52 - WARNING - tasks - Maximum sequence length exceeded: No prediction for 'Angeklagte'.
01/21/2021 11:08:52 - WARNING - tasks - Maximum sequence length exceeded: No prediction for 'trägt'.
01/21/2021 11:08:52 - WARNING - tasks - Maximum sequence length exceeded: No prediction for 'die'.
01/21/2021 11:08:52 - WARNING - tasks - Maximum sequence length exceeded: No prediction for 'Kosten'.
01/21/2021 11:08:52 - WARNING - tasks - Maximum sequence length exceeded: No prediction for 'des'.
01/21/2021 11:08:52 - WARNING - tasks - Maximum sequence length exceeded: No prediction for 'Verfahrens'.
(and a lot more of these)
```
Config that the model was trained on:
```
Training Arguments:
Data directory: ...ng_scripts/CrossVal_Files/Rotation/Train_file_swap
Model: ...bert-base-german-cased
Epochs: 8
Seq length: 300
Learning rate: 5e-05
Batch size: 16
Seed: 105
Do Train: True
Do Eval: True
Do Test: True
```
## Expected behavior
That the predictions for the documents are correctly listed and written to file without a sequence length error or shifted indices.
Thank you for your help! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9724/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9723 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9723/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9723/comments | https://api.github.com/repos/huggingface/transformers/issues/9723/events | https://github.com/huggingface/transformers/pull/9723 | 790,870,691 | MDExOlB1bGxSZXF1ZXN0NTU5MDE1NDM3 | 9,723 | [LED] Reduce Slow Test required GPU RAM from 16GB to 8GB | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR prevents the slow test:
`tests/test_modeling_led.py::LEDModelIntegrationTests::test_seq_to_seq_generation` from failing by reducing the required GPU RAM to 8GB.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9723/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9723",
"html_url": "https://github.com/huggingface/transformers/pull/9723",
"diff_url": "https://github.com/huggingface/transformers/pull/9723.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9723.patch",
"merged_at": 1611224175000
} |
https://api.github.com/repos/huggingface/transformers/issues/9722 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9722/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9722/comments | https://api.github.com/repos/huggingface/transformers/issues/9722/events | https://github.com/huggingface/transformers/issues/9722 | 790,860,854 | MDU6SXNzdWU3OTA4NjA4NTQ= | 9,722 | convert_graph_to_onnx.convert broken for translation model facebook/wmt19-en-de | {
"login": "oborchers",
"id": 26734737,
"node_id": "MDQ6VXNlcjI2NzM0NzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/26734737?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oborchers",
"html_url": "https://github.com/oborchers",
"followers_url": "https://api.github.com/users/oborchers/followers",
"following_url": "https://api.github.com/users/oborchers/following{/other_user}",
"gists_url": "https://api.github.com/users/oborchers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oborchers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oborchers/subscriptions",
"organizations_url": "https://api.github.com/users/oborchers/orgs",
"repos_url": "https://api.github.com/users/oborchers/repos",
"events_url": "https://api.github.com/users/oborchers/events{/privacy}",
"received_events_url": "https://api.github.com/users/oborchers/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you for this excellent report, @oborchers - I'll investigate and report back.",
"Fixed in https://github.com/huggingface/transformers/pull/9736 \r\n\r\nBut found another problem: https://github.com/huggingface/transformers/issues/9737. Fixed in https://github.com/huggingface/transformers/pull/9738\r\n\r\nSo you will need both PRs for your task to work in case you want to try before they are merged.\r\n\r\n\r\n",
"Awesome! Thank you, @stas00! Looking forward to try it out after PRs have been merged. Much appreciated ",
"The problem you reported has been fixed in https://github.com/huggingface/transformers/pull/9736 (merged already)\r\n\r\nBut then another one poped up in https://github.com/huggingface/transformers/issues/9737\r\n\r\nYou can just use the https://github.com/huggingface/transformers/pull/9738 branch - since it contains both fixes.\r\n\r\nNot sure how quickly it will get merged, since we might want to solve this for other models too. I made only a local for fsmt fix in that PR.",
"Great, thank you for the fast response and issue handling. I will provide a followup on #9738. While export works as intended, there is an issue I encounter while running the following code (built on 1st example):\r\n\r\n```\r\nsess = rt.InferenceSession(str(Path(\"encoder/en_de_trans.onnx\")), opt)\r\nspans = [\r\n \"My name is Bert\", # Succeeds\r\n \"My name is Bert and\" # Fails\r\n]\r\nfor span in spans:\r\n model_input = nlp.tokenizer.encode_plus(span)\r\n model_input = {name : np.atleast_2d(value) for name, value in model_input.items()}\r\n out = nlp.model(**nlp.tokenizer(span, return_tensors=\"pt\"))\r\n trans_1 = out[0].detach().cpu().numpy()\r\n trans_2 = out[1].detach().cpu().numpy()\r\n onnx_1, onnx_2 = sess.run(None, model_input)\r\n assert np.allclose(trans_1, onnx_1, atol=1e-5)\r\n assert np.allclose(trans_2, onnx_2, atol=1e-5)\r\n```\r\n\r\n\"My name is Bert and\" will raise:\r\n```\r\n---------------------------------------------------------------------------\r\nRuntimeException Traceback (most recent call last)\r\n<ipython-input-3-3ef2da9bdd5e> in <module>\r\n 10 trans_1 = out[0].detach().cpu().numpy()\r\n 11 trans_2 = out[1].detach().cpu().numpy()\r\n---> 12 onnx_1, onnx_2 = sess.run(None, model_input)\r\n 13 assert np.allclose(trans_1, onnx_1, atol=1e-5)\r\n 14 assert np.allclose(trans_2, onnx_2, atol=1e-5)\r\n\r\n~/anaconda3/envs/dev/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py in run(self, output_names, input_feed, run_options)\r\n 122 output_names = [output.name for output in self._outputs_meta]\r\n 123 try:\r\n--> 124 return self._sess.run(output_names, input_feed, run_options)\r\n 125 except C.EPFail as err:\r\n 126 if self._enable_fallback:\r\n\r\nRuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_74' Status Message: /data/shared/packages/onnxruntime/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:43 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, std::vector<long int>&) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{1,6}, requested shape:{5}\r\n```\r\n\r\nSolely based on intuition I'd assume that some dynamic shape of was not inferred properly/not passed to the dynamic_shapes of torch.onnx.export. But thats just a quick guess. Or did I miss something?\r\n\r\n\r\nI see that I would have to look-into/re-implement the generate function, as only the tensors are passed back. I'm going to create a feature suggestion to support the [ORT Custom Ops](https://github.com/microsoft/ort-customops). Perhaps It would be possible to retrieve the actual translated string in the far future, instead of the tensors (or specify the output). \r\n\r\nAs promised follow up feature request + suggestion under #9784 ",
"Honestly, I don't know much about the ONNX-side of things. I asked @mfuntowicz to hopefully have a look and address this.\r\n\r\nAlso tagging @LysandreJik and @patrickvonplaten who perhaps may have some answers as well.\r\n\r\nI wonder if this is an issue project-wise, e.g. do you have the same issue if you do this on a Bart model? I'm asking since fsmt is Bart with some tweaks.\r\n\r\nAlso I think it's best to open a new issue, since now we are dealing with a different issue, so it'd be easier to track and monitor.",
"Thank you for your help, @stas00! I followed your advice and created a new issue.",
"@oborchers It seems that it is a problem of the pythorch export of the dynamic_axes.\r\nUsing the nightly version (torch-1.9.0.dev20210212 + cpu) it works.\r\n\r\nOn the other hand, I am interested in using the onnx models to generate, (translate and summarize). \r\nCould you give me some indication of how to do a custom forward using the onnx model, to use in the generation_utils.generate function.\r\n\r\nPS: for what you comment here [9784](https://github.com/huggingface/transformers/issues/9784) you plan to work on a User-specific re-implementation.\r\nThanks"
] | 1,611 | 1,613 | 1,611 | NONE | null | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-4.15.0-132-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help
@mfuntowicz (based on initial commit of convert_graph_to_onnx)
@stas00 (based on model used here)
@thomwolf (based on history)
## Information
Model I am using (Bert, XLNet ...): facebook/wmt19-en-de
The problem arises when using:
* [X] the official example scripts: transformers.convert_graph_to_onnx.convert
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: converting the translation model to onnx
## To reproduce
Steps to reproduce the behavior:
```
import torch
import transformers
from transformers import convert_graph_to_onnx
from pathlib import Path
nlp = transformers.pipeline("translation_en_to_de", model="facebook/wmt19-en-de", tokenizer="facebook/wmt19-en-de")
convert_graph_to_onnx.convert(
framework="pt",
model="facebook/wmt19-en-de",
output=Path("encoder/en_de_trans.onnx"),
opset=12,
tokenizer="facebook/wmt19-en-de",
use_external_format= False,
pipeline_name= "translation_en_to_de",
)
```
Raises:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-1-d46bec961b86> in <module>
5
6 nlp = transformers.pipeline("translation_en_to_de", model="facebook/wmt19-en-de", tokenizer="facebook/wmt19-en-de")
----> 7 convert_graph_to_onnx.convert(
8 framework="pt",
9 model="facebook/wmt19-en-de",
~/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/convert_graph_to_onnx.py in convert(framework, model, output, opset, tokenizer, use_external_format, pipeline_name)
365 # Export the graph
366 if framework == "pt":
--> 367 convert_pytorch(nlp, opset, output, use_external_format)
368 else:
369 convert_tensorflow(nlp, opset, output)
~/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/convert_graph_to_onnx.py in convert_pytorch(nlp, opset, output, use_external_format)
274
275 with torch.no_grad():
--> 276 input_names, output_names, dynamic_axes, tokens = infer_shapes(nlp, "pt")
277 ordered_input_names, model_args = ensure_valid_input(nlp.model, tokens, input_names)
278
~/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/convert_graph_to_onnx.py in infer_shapes(nlp, framework)
196 tokens = nlp.tokenizer("This is a sample output", return_tensors=framework)
197 seq_len = tokens.input_ids.shape[-1]
--> 198 outputs = nlp.model(**tokens) if framework == "pt" else nlp.model(tokens)
199 if isinstance(outputs, ModelOutput):
200 outputs = outputs.to_tuple()
~/anaconda3/envs/dev/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
TypeError: forward() got an unexpected keyword argument 'token_type_ids'
```
The exception can be traced back to the shape-inference step that runs before [torch.onnx.export](https://github.com/huggingface/transformers/blob/6a346f0358a40f89ec384d441233bf54cac44f6a/src/transformers/convert_graph_to_onnx.py#L196).
I think this may be due to an incompatibility between tokenizer() and tokenizer.encode() for this particular model.
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("facebook/wmt19-en-de")
model = transformers.AutoModelForSeq2SeqLM.from_pretrained("facebook/wmt19-en-de")
string = "Hello. How are you?"
# model.generate(tokenizer(string, return_tensors="pt")) # Fails
model.generate(tokenizer.encode(string, return_tensors="pt")) # Succeeds
```
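A possible workaround (just a sketch of the direction I have in mind, not a tested fix) is to drop the `token_type_ids` entry before calling the model, since FSMT's forward() does not accept that argument:
```
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("facebook/wmt19-en-de")
model = transformers.AutoModelForSeq2SeqLM.from_pretrained("facebook/wmt19-en-de")

tokens = tokenizer("Hello. How are you?", return_tensors="pt")
tokens.pop("token_type_ids", None)  # FSMT's forward() has no token_type_ids argument
outputs = model(**tokens)  # forward pass of the kind convert_graph_to_onnx uses for shape inference
```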
## Expected behavior
Model export should work properly.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9722/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9721 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9721/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9721/comments | https://api.github.com/repos/huggingface/transformers/issues/9721/events | https://github.com/huggingface/transformers/pull/9721 | 790,859,886 | MDExOlB1bGxSZXF1ZXN0NTU5MDA2MTA5 | 9,721 | [T5] Fix T5 model parallel tests | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Those tests were failing previously:
```
FAILED tests/test_modeling_t5.py::T5ModelTest::test_model_parallel_beam_search
FAILED tests/test_modeling_t5.py::T5ModelTest::test_model_parallel_equal_results
FAILED tests/test_modeling_t5.py::T5EncoderOnlyModelTest::test_model_parallel_equal_results
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9721/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9721",
"html_url": "https://github.com/huggingface/transformers/pull/9721",
"diff_url": "https://github.com/huggingface/transformers/pull/9721.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9721.patch",
"merged_at": 1611224233000
} |
https://api.github.com/repos/huggingface/transformers/issues/9720 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9720/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9720/comments | https://api.github.com/repos/huggingface/transformers/issues/9720/events | https://github.com/huggingface/transformers/pull/9720 | 790,851,821 | MDExOlB1bGxSZXF1ZXN0NTU4OTk5MTkx | 9,720 | Temporarily deactivate TPU tests while we work on fixing them | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | MEMBER | null | Temporarily deactivates TPU tests while we work on fixing them. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9720/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9720",
"html_url": "https://github.com/huggingface/transformers/pull/9720",
"diff_url": "https://github.com/huggingface/transformers/pull/9720.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9720.patch",
"merged_at": 1611220659000
} |
https://api.github.com/repos/huggingface/transformers/issues/9719 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9719/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9719/comments | https://api.github.com/repos/huggingface/transformers/issues/9719/events | https://github.com/huggingface/transformers/pull/9719 | 790,832,792 | MDExOlB1bGxSZXF1ZXN0NTU4OTgzMTE4 | 9,719 | [PretrainedModel] add tie_weights to init | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Actually the init takes care of this so no need for this PR"
] | 1,611 | 1,611 | 1,611 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
When someone wants to pretrain a model from scratch with `config.tie_word_embeddings=True`, one would expect that even
when doing:
```python
model = BertModel(BertConfig())
```
that the word embedding weights are tied. However, this is not the case at the moment. This PR fixes it.
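As a quick illustration, one way to check the tying (a sketch using a model with an LM head, where input and output embeddings are expected to share storage when tying is active):
```python
from transformers import BertConfig, BertForMaskedLM

model = BertForMaskedLM(BertConfig())  # config.tie_word_embeddings defaults to True
input_emb = model.get_input_embeddings().weight
output_emb = model.get_output_embeddings().weight
# With tying applied at init, both names refer to the same underlying storage.
print(input_emb.data_ptr() == output_emb.data_ptr())
```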
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9719/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9719",
"html_url": "https://github.com/huggingface/transformers/pull/9719",
"diff_url": "https://github.com/huggingface/transformers/pull/9719.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9719.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9718 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9718/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9718/comments | https://api.github.com/repos/huggingface/transformers/issues/9718/events | https://github.com/huggingface/transformers/issues/9718 | 790,825,593 | MDU6SXNzdWU3OTA4MjU1OTM= | 9,718 | T5 Model Parallelism in 4.3.0 | {
"login": "PeterAJansen",
"id": 3813268,
"node_id": "MDQ6VXNlcjM4MTMyNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3813268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PeterAJansen",
"html_url": "https://github.com/PeterAJansen",
"followers_url": "https://api.github.com/users/PeterAJansen/followers",
"following_url": "https://api.github.com/users/PeterAJansen/following{/other_user}",
"gists_url": "https://api.github.com/users/PeterAJansen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PeterAJansen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PeterAJansen/subscriptions",
"organizations_url": "https://api.github.com/users/PeterAJansen/orgs",
"repos_url": "https://api.github.com/users/PeterAJansen/repos",
"events_url": "https://api.github.com/users/PeterAJansen/events{/privacy}",
"received_events_url": "https://api.github.com/users/PeterAJansen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"If you run into other issues please try this PR: https://github.com/huggingface/transformers/pull/9323\r\nwhich has lots of improvements. It just hasn't been merged since we are waiting for me I think to sort the whole MP/PP out before moving forward."
] | 1,611 | 1,611 | 1,611 | NONE | null | ## Environment info
- `transformers` version: 4.3.0.dev0
- Platform: Linux-5.4.0-62-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.8.0.dev20210120 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes -- 4x A100-SXM4-40GB
- Using distributed or parallel set-up in script?: Yes
### Who can help
@stas00 @alexorona @sgugger
## Related
Related to the discussion in #8771 ( https://github.com/huggingface/transformers/issues/8771#issuecomment-764069755 ), which suggests MP can be done in 4.3.0 just by calling model.parallelize() after loading. I opened a separate issue rather than hijack that one, which is about MP improvements in general.
## Information
Model I am using (Bert, XLNet ...): T5
The problem arises when using:
* [ ] my own modified scripts: (give details below)
Added one line to finetune_trainer.py after the model is loaded (model.parallelize(), see below).
```
+++ b/examples/seq2seq/finetune_trainer.py
@@ -215,6 +215,9 @@ def main():
# use task specific params
use_task_specific_params(model, data_args.task)
+ # PJ: Parallelize model
+ model.parallelize()
+
# set num_beams for evaluation
if data_args.eval_beams is None:
data_args.eval_beams = model.config.num_beams
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: Running the example on an official task/dataset (seq2seq)
## To reproduce
Steps to reproduce the behavior:
On 4.3.0-dev (tonight):
1. Fresh pull of transformers. Add change above ( model.parallelize() ).
2. Run the runscript (below). The error appears to reproduce for any model size (e.g. I'm using t5-11b, but it also happens with t5-large).
```
python finetune_trainer.py \
--learning_rate=3e-5 \
--do_train --do_eval --do_predict \
--evaluation_strategy steps \
--predict_with_generate \
--n_val 1000 \
--data_dir xsum \
--output_dir=xsum_results \
--num_train_epochs 1 \
--model_name_or_path t5-large \
--fp16 \
"$@"
```
3. The error (a rough guard sketch for this crash is included after step 4 below):
```
...
[INFO|modeling_utils.py:1152] 2021-01-21 00:52:03,923 >> All the weights of T5ForConditionalGeneration were initialized from the model checkpoint at t5-large.
If your task is similar to the task the model of the checkpoint was trained on, you can already use T5ForConditionalGeneration for predictions without further training.
01/21/2021 00:52:03 - INFO - utils - setting model.config to task specific params for summarization:
{'early_stopping': True, 'length_penalty': 2.0, 'max_length': 200, 'min_length': 30, 'no_repeat_ngram_size': 3, 'num_beams': 4, 'prefix': 'summarize: '}
01/21/2021 00:52:03 - INFO - utils - note: command line args may override some of these
[INFO|trainer.py:362] 2021-01-21 00:52:14,376 >> Using amp fp16 backend
01/21/2021 00:52:14 - INFO - __main__ - *** Train ***
[INFO|trainer.py:813] 2021-01-21 00:52:14,383 >> ***** Running training *****
[INFO|trainer.py:814] 2021-01-21 00:52:14,383 >> Num examples = 204016
[INFO|trainer.py:815] 2021-01-21 00:52:14,383 >> Num Epochs = 1
[INFO|trainer.py:816] 2021-01-21 00:52:14,383 >> Instantaneous batch size per device = 8
[INFO|trainer.py:817] 2021-01-21 00:52:14,383 >> Total train batch size (w. parallel, distributed & accumulation) = 8
[INFO|trainer.py:818] 2021-01-21 00:52:14,383 >> Gradient Accumulation steps = 1
[INFO|trainer.py:819] 2021-01-21 00:52:14,383 >> Total optimization steps = 25502
0%| | 0/25502 [00:00<?, ?it/s]Traceback (most recent call last):
File "finetune_trainer.py", line 370, in <module>
main()
File "finetune_trainer.py", line 301, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/home/pajansen/anaconda3/envs/transformers-4.1.1a/lib/python3.7/site-packages/transformers/trainer.py", line 910, in train
tr_loss += self.training_step(model, inputs)
File "/home/pajansen/anaconda3/envs/transformers-4.1.1a/lib/python3.7/site-packages/transformers/trainer.py", line 1272, in training_step
loss = self.compute_loss(model, inputs)
File "/home/pajansen/anaconda3/envs/transformers-4.1.1a/lib/python3.7/site-packages/transformers/trainer.py", line 1300, in compute_loss
outputs = model(**inputs)
File "/home/pajansen/anaconda3/envs/transformers-4.1.1a/lib/python3.7/site-packages/torch/nn/modules/module.py", line 873, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/pajansen/anaconda3/envs/transformers-4.1.1a/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 1500, in forward
return_dict=return_dict,
File "/home/pajansen/anaconda3/envs/transformers-4.1.1a/lib/python3.7/site-packages/torch/nn/modules/module.py", line 873, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/pajansen/anaconda3/envs/transformers-4.1.1a/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 938, in forward
head_mask = head_mask.to(hidden_states.device)
AttributeError: 'list' object has no attribute 'to'
0%| | 0/25502 [00:00<?, ?it/s]
```
4. It's worth noting that the behavior on 4.1.1 is different and it works (essentially the same change, but with the device map specified as per https://huggingface.co/transformers/model_doc/t5.html?highlight=parallel#transformers.T5EncoderModel.parallelize , and the runscript also has the --model_parallel flag).
- `transformers` version: 4.1.1
- Platform: Linux-5.4.0-62-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.8.0.dev20210120 (True)
Change:
```
+++ b/examples/seq2seq/finetune_trainer.py
@@ -231,6 +231,13 @@ def main():
# use task specific params
use_task_specific_params(model, data_args.task)
+ # PJ: Parallelize
+ device_map = {0: [0, 1, 2],
+ 1: [3, 4, 5, 6, 7, 8, 9],
+ 2: [10, 11, 12, 13, 14, 15, 16],
+ 3: [17, 18, 19, 20, 21, 22, 23]}
+ model.parallelize(device_map)
+
# set num_beams for evaluation
if data_args.eval_beams is None:
data_args.eval_beams = model.config.num_beams
```
Runscript:
```
python finetune_trainer.py \
--learning_rate=3e-5 \
--fp16 \
--do_train \
--data_dir xsum \
--output_dir=xsum_results \
--num_train_epochs 1 \
--model_name_or_path t5-large \
--max_source_length 96 \
--max_target_length 96 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--model_parallel \
"$@"
```
Output (works fine):
```
01/21/2021 01:25:01 - INFO - __main__ - *** Train ***
[INFO|trainer.py:703] 2021-01-21 01:25:01,016 >> ***** Running training *****
[INFO|trainer.py:704] 2021-01-21 01:25:01,016 >> Num examples = 999
[INFO|trainer.py:705] 2021-01-21 01:25:01,016 >> Num Epochs = 1
[INFO|trainer.py:706] 2021-01-21 01:25:01,016 >> Instantaneous batch size per device = 1
[INFO|trainer.py:707] 2021-01-21 01:25:01,016 >> Total train batch size (w. parallel, distributed & accumulation) = 1
[INFO|trainer.py:708] 2021-01-21 01:25:01,016 >> Gradient Accumulation steps = 1
[INFO|trainer.py:709] 2021-01-21 01:25:01,017 >> Total optimization steps = 999
0%| | 0/999 [00:00<?, ?it/s]/home/pajansen/anaconda3/envs/transformers-4.1.1/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:134: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
"https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
{'loss': nan, 'learning_rate': 1.4984984984984986e-05, 'epoch': 0.5005005005005005}
50%|█████████████████████████████████████████████████████████████████████████████████████████████████▌ | 500/999 [02:25<02:20, 3.54it/s][INFO|trainer.py:1226] 2021-01-21 01:27:26,134 >> Saving model checkpoint to xsum-mini_results/checkpoint-500
[INFO|configuration_utils.py:289] 2021-01-21 01:27:26,138 >> Configuration saved in xsum-mini_results/checkpoint-500/config.json
[INFO|modeling_utils.py:814] 2021-01-21 01:27:29,444 >> Model weights saved in xsum-mini_results/checkpoint-500/pytorch_model.bin
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 999/999 [04:54<00:00, 3.30it/s][INFO|trainer.py:862] 2021-01-21 01:29:55,140 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
{'epoch': 1.0}
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 999/999 [04:54<00:00, 3.40it/s]
[INFO|trainer.py:1226] 2021-01-21 01:29:55,141 >> Saving model checkpoint to xsum-mini_results
[INFO|configuration_utils.py:289] 2021-01-21 01:29:55,146 >> Configuration saved in xsum-mini_results/config.json
[INFO|modeling_utils.py:814] 2021-01-21 01:29:58,207 >> Model weights saved in xsum-mini_results/pytorch_model.bin
01/21/2021 01:29:58 - INFO - __main__ - ***** train metrics *****
01/21/2021 01:29:58 - INFO - __main__ - train_samples_per_second = -0.003
01/21/2021 01:29:58 - INFO - __main__ - train_runtime = 294.1311
01/21/2021 01:29:58 - INFO - __main__ - train_n_ojbs = -1
```
(Note: I substituted the xsum dataset above with a shorter version I made with `head`, keeping just the first 1000 lines of each file, to see if training would run to completion without taking 15 hours on the full example dataset. It looks okay. It's worth noting that if the validation arguments are added:
```
--evaluation_strategy steps \
--predict_with_generate \
--n_val 1000 \
```
then 4.1.1 will die at the checkpoints (500 iterations) with "RuntimeError: Input, output and indices must be on the current device". (I don't fully understand that one; I'm assuming it means train/eval has to be done separately with MP, which is entirely manageable. #9336 showed a similar error, but that person was using BART (which doesn't have MP in 4.1.1) instead of T5, so I don't think it's the same thing).
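For reference, here is the rough guard I had in mind for the AttributeError in step 3 (a hypothetical sketch of the logic that would be needed around the `head_mask.to(...)` call in modeling_t5.py, not an actual patch):
```
import torch

def move_head_mask(head_mask, device):
    # Only tensors can be moved; in the trace above the mask reaches this point
    # as a plain list, which triggers the AttributeError.
    if isinstance(head_mask, torch.Tensor):
        return head_mask.to(device)
    return head_mask

print(move_head_mask([None] * 24, torch.device("cpu")))       # list is passed through untouched
print(move_head_mask(torch.ones(2, 3), torch.device("cpu")))  # tensor is moved as before
```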
## Expected behavior
Model parallelism -- spreading large models across multiple GPUs.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9718/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9717 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9717/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9717/comments | https://api.github.com/repos/huggingface/transformers/issues/9717/events | https://github.com/huggingface/transformers/pull/9717 | 790,815,118 | MDExOlB1bGxSZXF1ZXN0NTU4OTY4MjQ0 | 9,717 | ConvBERT Model | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
},
{
"id": 2669577093,
"node_id": "MDU6TGFiZWwyNjY5NTc3MDkz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PR%20for%20Model%20Addition",
"name": "PR for Model Addition",
"color": "5319e7",
"default": false,
"description": ""
}
] | closed | false | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
}
] | [
"> Also, I think you forgot to add the model to the README.md (I'll forget it all the time as well :D)\r\n\r\nOh and while you are at it, a short entry in the `model_summary` would be great too!"
] | 1,611 | 1,612 | 1,611 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9717/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9717",
"html_url": "https://github.com/huggingface/transformers/pull/9717",
"diff_url": "https://github.com/huggingface/transformers/pull/9717.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9717.patch",
"merged_at": 1611735610000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/9716 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9716/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9716/comments | https://api.github.com/repos/huggingface/transformers/issues/9716/events | https://github.com/huggingface/transformers/issues/9716 | 790,674,583 | MDU6SXNzdWU3OTA2NzQ1ODM= | 9,716 | CUDA out of memory error on Trainer hyperparameter_search | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"There's not enough information here to help you. Please provide env info, the model/tasks you are using, and a short text code snippet to reproduce the error. ",
"@patil-suraj I have edited the comment. Please help me out. Thanks.",
"@patil-suraj Same issue here\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,611 | 1,619 | 1,619 | CONTRIBUTOR | null | Hi,
I am using Colab to run a grid search on a dataset. The dataset has 7000 samples. I use a 1/5 shard and a batch size of 1. No matter what I do, I get this error.
Env Related Info
-----------
transformers (4.2.1)
datasets (1.2.1)
Model Used
----------------
[SpanBERT Large](https://huggingface.co/SpanBERT/spanbert-large-cased)
Snippets
------------
My dataset has 7934 train examples and 690 eval examples, with a maximum of around 300 tokens per example.
Tokenization Details:
```
max_length: 384
doc_stride: 128
```
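For context, these values enter the preprocessing roughly as follows (a sketch along the lines of the standard question-answering preprocessing; `examples` and the column names are assumptions, and the start/end position labelling is omitted):
```
def prepare_features(examples):
    # Sliding-window tokenization: long contexts are split into overlapping
    # chunks of max_length=384 with a stride (doc_stride) of 128 tokens.
    return tokenizer(
        examples["question"],
        examples["context"],
        truncation="only_second",
        max_length=384,
        stride=128,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )
```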
```
def model_init():
return AutoModelForQuestionAnswering.from_pretrained(model_checkpoint_name, num_labels=2)
args = TrainingArguments(
output_dir = os.path.join(checkpoint_dir,'/grid_search/'),
logging_dir= os.path.join(runs_dir,'/grid_search/'),
evaluation_strategy='epoch',
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
gradient_accumulation_steps=1,
learning_rate=2e-5,
weight_decay=0.01,
num_train_epochs=3.0,
lr_scheduler_type="linear",
warmup_steps=0,
logging_steps=200,
save_steps=200,
seed=42
)
grid_search_trainer = Trainer(
model_init=model_init,
args=args,
train_dataset=tokenized_train.shard(index=1,num_shards=5),
eval_dataset=tokenized_val.shard(index=1,num_shards=5),
data_collator=data_collator,
tokenizer=tokenizer
)
best_run = grid_search_trainer.hyperparameter_search(n_trials=10, direction="minimize")
```
Sometimes, this happens after the trainer is done with one epoch. Is the trainer initializing the model without deleting/clearing the previous one? Would that affect the GPU? How can I prevent this issue?
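For reference, a rough sketch of what I mean by clearing the previous model between trials (only a guess at a workaround, untested here; whether it helps depends on whether the Trainer still holds an internal reference):
```python
import gc
import torch

# Guessed cleanup between trials: drop the reference to the previous model,
# then ask PyTorch to release its cached CUDA memory.
grid_search_trainer.model = None
gc.collect()
torch.cuda.empty_cache()
```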

@patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9716/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9715 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9715/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9715/comments | https://api.github.com/repos/huggingface/transformers/issues/9715/events | https://github.com/huggingface/transformers/issues/9715 | 790,571,779 | MDU6SXNzdWU3OTA1NzE3Nzk= | 9,715 | Error when passing --line_by_line to run_mlm.py | {
"login": "mlpacheco",
"id": 2424080,
"node_id": "MDQ6VXNlcjI0MjQwODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2424080?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mlpacheco",
"html_url": "https://github.com/mlpacheco",
"followers_url": "https://api.github.com/users/mlpacheco/followers",
"following_url": "https://api.github.com/users/mlpacheco/following{/other_user}",
"gists_url": "https://api.github.com/users/mlpacheco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mlpacheco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mlpacheco/subscriptions",
"organizations_url": "https://api.github.com/users/mlpacheco/orgs",
"repos_url": "https://api.github.com/users/mlpacheco/repos",
"events_url": "https://api.github.com/users/mlpacheco/events{/privacy}",
"received_events_url": "https://api.github.com/users/mlpacheco/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Update: this error was present when using Python 3.6.9, it disappeared when using Python 3.7.5\r\nI found the hint at: https://github.com/huggingface/transformers/issues/8212 ",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,611 | 1,614 | 1,614 | NONE | null | I am trying to run the `run_mlm.py` examples/language-modeling using my own data file. It works fine if I don't pass the `--line_by_line` parameter, but if I do, it breaks.
I can't figure this out from the error trace; can anyone give me a hand?
```
python3 run_mlm.py --model_name_or_path bert-base-uncased --train_file full_dataset_lm_train.txt --validation_file full_dataset_lm_dev.txt --do_train --do_eval --output_dir /tmp/moral_foundation_lm/ --line_by_line
```
```
Traceback (most recent call last):
File "run_mlm.py", line 446, in <module>
main()
File "run_mlm.py", line 322, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/dataset_dict.py", line 303, in map
for k, dataset in self.items()
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp>
for k, dataset in self.items()
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1260, in map
update_data=update_data,
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 157, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/fingerprint.py", line 158, in wrapper
self._fingerprint, transform, kwargs_for_fingerprint
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/fingerprint.py", line 105, in update_fingerprint
hasher.update(transform_args[key])
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/fingerprint.py", line 57, in update
self.m.update(self.hash(value).encode("utf-8"))
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/fingerprint.py", line 53, in hash
return cls.hash_default(value)
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/fingerprint.py", line 46, in hash_default
return cls.hash_bytes(dumps(value))
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 389, in dumps
dump(obj, file)
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 361, in dump
Pickler(file, recurse=True).dump(obj)
File "/usr/lib/python3.6/pickle.py", line 409, in dump
self.save(obj)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 556, in save_function
obj=obj,
File "/usr/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.6/pickle.py", line 751, in save_tuple
save(element)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.6/pickle.py", line 736, in save_tuple
save(element)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/homes/pachecog/.local/lib/python3.6/site-packages/dill/_dill.py", line 1129, in save_cell
pickler.save_reduce(_create_cell, (f,), obj=obj)
File "/usr/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.6/pickle.py", line 736, in save_tuple
save(element)
File "/usr/lib/python3.6/pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv)
File "/usr/lib/python3.6/pickle.py", line 605, in save_reduce
save(cls)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/homes/pachecog/.local/lib/python3.6/site-packages/dill/_dill.py", line 1315, in save_type
obj.__bases__, _dict), obj=obj)
File "/usr/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.6/pickle.py", line 751, in save_tuple
save(element)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/homes/pachecog/.local/lib/python3.6/site-packages/dill/_dill.py", line 902, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/usr/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/homes/pachecog/.local/lib/python3.6/site-packages/dill/_dill.py", line 902, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/usr/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/usr/lib/python3.6/pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv)
File "/usr/lib/python3.6/pickle.py", line 634, in save_reduce
save(state)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.6/pickle.py", line 736, in save_tuple
save(element)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/homes/pachecog/.local/lib/python3.6/site-packages/dill/_dill.py", line 902, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/usr/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/homes/pachecog/.local/lib/python3.6/site-packages/dill/_dill.py", line 1148, in save_dictproxy
raise ReferenceError("%s does not reference a class __dict__" % obj)
ReferenceError: {'help': 'The name of the dataset to use (via the datasets library).'} does not reference a class __dict__
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9715/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9714 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9714/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9714/comments | https://api.github.com/repos/huggingface/transformers/issues/9714/events | https://github.com/huggingface/transformers/issues/9714 | 790,492,391 | MDU6SXNzdWU3OTA0OTIzOTE= | 9,714 | Slow BERT Tokenizer adds UNK when calling tokenize() | {
"login": "ethch18",
"id": 12580176,
"node_id": "MDQ6VXNlcjEyNTgwMTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/12580176?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ethch18",
"html_url": "https://github.com/ethch18",
"followers_url": "https://api.github.com/users/ethch18/followers",
"following_url": "https://api.github.com/users/ethch18/following{/other_user}",
"gists_url": "https://api.github.com/users/ethch18/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ethch18/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ethch18/subscriptions",
"organizations_url": "https://api.github.com/users/ethch18/orgs",
"repos_url": "https://api.github.com/users/ethch18/repos",
"events_url": "https://api.github.com/users/ethch18/events{/privacy}",
"received_events_url": "https://api.github.com/users/ethch18/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I wonder why the docstring says that, I believe both the slow and fast tokenizers replace with the unknown token. @thomwolf do you remember if that wasn't the case when you wrote the docstring?",
"Hi @thomwolf, any thoughts on this?\r\n\r\nAlso @LysandreJik, do you know if there's a way to prevent the replacement of the unknown token, or at least to identify what string it replaced?",
"If it did return strings that it could not understand, then the model would crash (which is why the docstring is surprising, and should be changed imo), and why we don't have a flag that would allow that. We can work around it, however.\r\n\r\nThe BERT tokenizer uses a whitespace tokenizer, which means that the first step is to split the input sequence on whitespace, before trying to convert each piece (or each \"word\") to tokens. When it fails to do that, it replaces that piece with the unknown token, so we can be confident that the unknown tokens are always space delimited strings.\r\n\r\nTherefore, we can do the following:\r\n\r\n```py\r\ntext_with_unknown_words = \"Let's try it with some 🤗 emojis 🤗 every 🤗 where 🤗.\"\r\n\r\n# Strip it and split it on whitespace\r\nlist_of_space_separated_pieces = text_with_unknown_words.strip().split()\r\n\r\n# Let's try it with the BERT-base-cased tokenizer\r\nfrom transformers import BertTokenizer\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-cased\")\r\n\r\n# Now let's encode the list word by word, removing the special tokens for simplicity\r\nids = [tokenizer(piece, add_special_tokens=False)[\"input_ids\"] for piece in list_of_space_separated_pieces]\r\n# [[2421, 112, 188], [2222], [1122], [1114], [1199], [100], [9712, 1186, 3454, 1116], [100], [1451], [100], [1187], [100, 119]]\r\n# The tokenizer's unknown token is 100 and we can identify it easily here\r\n\r\n# Identify the tokens which are unknown according to the tokenizer's `unk_token_id`\r\nunk_indices = [i for i, encoded in enumerate(ids) if tokenizer.unk_token_id in encoded]\r\n\r\n# Retrieve the strings that were converted into unknown tokens\r\nunknown_strings = [piece for i, piece in enumerate(list_of_space_separated_pieces) if i in unk_indices]\r\n# ['🤗', '🤗', '🤗', '🤗.'] Victory!\r\n```\r\n\r\nLet me know if that helps.",
"Thanks @LysandreJik -- this is really helpful! I ended up adding a couple more steps from BERT's basic tokenizer, since it also splits on punctuation, etc.\r\n\r\nNot sure if I should close this issue now, or if it should stay open until the docstring issue is figured out?",
"Yes, the docstring should be updated. Do you want to take a stab at contributing? :)",
"Sure -- would it just involve fixing the docstrings that say this in the python code and then building the docs as specified [here](https://github.com/huggingface/transformers/blob/master/docs/README.md)? Or is there more to it?",
"Actually just fixing the docstrings and comitting! As soon as we merge the PR the documentation of the `master` branch will be updated."
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | Hi! I've run into an inconsistency between the base tokenizer docstring and the slow BERT tokenizer. Specifically, when calling `tokenizer.encode()`, the `[UNK]` token is inserted for unknown tokens, even though the [docstring](https://github.com/huggingface/transformers/blob/v4.0.1/src/transformers/tokenization_utils_base.py#L2026) says that such tokens should be unchanged. Here's how I'm calling the tokenizer:
```python
tokenizer = BertTokenizer.from_pretrained(
save_dir, do_lower_case=False, strip_accents=False, tokenize_chinese_chars=True
)
sentence = "RINDIRIZZA Ġwann Marija Vianney"
print(tokenizer.tokenize(sentence))
```
and the output is
```
['RI', '##ND', '##IR', '##I', '##Z', '##ZA', '[UNK]', 'Marija', 'Via', '##nne', '##y']
```
(notice the `[UNK]` in the middle).
So, it seems that this particular slow tokenizer isn't following the docstring. Is this expected?
If not, is there a way to prevent replacement of unknown tokens? I wanted to use the slow BERT tokenizer over the fast one for exactly this reason, and it'd be great if there's a way to make this work.
I'm using `transformers` v4.0.1, but it looks like this docstring hasn't changed between [`master`](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L2027) and `4.0.1`.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9714/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9713 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9713/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9713/comments | https://api.github.com/repos/huggingface/transformers/issues/9713/events | https://github.com/huggingface/transformers/pull/9713 | 790,483,134 | MDExOlB1bGxSZXF1ZXN0NTU4NjczNTg5 | 9,713 | Fix memory regression in Seq2Seq example | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I agree with Patrick here, the collator was added to encode the text and to prepare the `decoder_input_ids` and `labels`, replace pad with 100 etc. Now we could encode and prepare `labels` in datasets.map(...) so collator won't be needed anymore. \r\n\r\nThe only thing we need IMO is to be able to prepare `decoder_input_ids` outside of the model for label smoothing as Sylvain said. Could we maybe make the add `shift_right` method to every s2s model to able to prepare the `decoder_input_ids` outside of the model ?",
"Note that this is fixing the old script with the old data collator. The new one will be fixed with the proper fix (once we agree on it and there seems to be a consensus on having a model with a `shift_right` method) but is still necessary to do dynamic padding. The `Dataset.map` method is very nice for static things but when you want to pad to the length of the biggest sample in the batch, you need a special data collator, especially if it has to pad special keys like `\"labels\"`, `\"decoder_input_ids\"`...\r\n\r\nThe old `Seq2SeqDataCollator` in the utils file will be removed in a couple of weeks when the new seq2seq example is perfectly running, so I think it's fine to merge the quick hack in the meantime :-)"
] | 1,611 | 1,611 | 1,611 | COLLABORATOR | null | # What does this PR do?
This PR fixes the memory regression introduced when putting the `Seq2SeqTrainer` inside the main library. The root of the memory regression comes from the fact that when doing label smoothing, we ended up computing the log softmax of the logits twice, once in the cross entropy loss, and a second time inside the label smoother.
To fix this, the loss computation needs to be done entirely inside the label smoother, so the labels must be extracted from the batch before it is passed to the model. As a result, the `decoder_input_ids` must be computed in the `Seq2SeqDataCollator` and not in the model for this to work. I've just reverted the code from #9343; I don't know if it actually matches what happens inside the models. Maybe we should have a method to compute those `decoder_input_ids` accessible from those models, or a flag to tell them whether to compute the loss or not (in that case, computing the loss will not only be slower, it will also bring back the memory regression).
The same fix will need to be applied to the `Seq2SeqDataCollator` now inside the library as well as the new `run_seq2seq` script, but I will do it once we have agreed on a long-term solution for the decoder input ids above.
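For illustration, preparing the `decoder_input_ids` outside the model boils down to a shift-right of the labels; a rough sketch (the pad/start-token handling below is an assumption, real models differ in the details):
```python
import torch

def shift_tokens_right(labels: torch.Tensor, pad_token_id: int, decoder_start_token_id: int) -> torch.Tensor:
    # Sketch: build decoder inputs by shifting the labels one position to the
    # right and prepending the decoder start token.
    decoder_input_ids = labels.new_zeros(labels.shape)
    decoder_input_ids[:, 1:] = labels[:, :-1].clone()
    decoder_input_ids[:, 0] = decoder_start_token_id
    # -100 is only meaningful for the loss; map it back to a real token id.
    decoder_input_ids.masked_fill_(decoder_input_ids == -100, pad_token_id)
    return decoder_input_ids
```
A data collator could then set `batch["decoder_input_ids"] = shift_tokens_right(batch["labels"], pad_id, start_id)` before the labels are handed to the label smoother.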
Fixes #9261 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9713/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9713/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9713",
"html_url": "https://github.com/huggingface/transformers/pull/9713",
"diff_url": "https://github.com/huggingface/transformers/pull/9713.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9713.patch",
"merged_at": 1611248747000
} |
https://api.github.com/repos/huggingface/transformers/issues/9712 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9712/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9712/comments | https://api.github.com/repos/huggingface/transformers/issues/9712/events | https://github.com/huggingface/transformers/pull/9712 | 790,445,971 | MDExOlB1bGxSZXF1ZXN0NTU4NjM5NDcy | 9,712 | [trainer] no --deepspeed and --sharded_ddp together | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for fixing!"
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | This PR fixes an invalid if branch, which fails to detect a concurrent use of `--deepspeed` and `--sharded_ddp`, which should never be used together.
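Conceptually, the guard just needs to be a mutual-exclusion check along these lines (a simplified sketch, not the exact code in the PR):
```python
# Simplified sketch of the intended check on the parsed training arguments.
if args.deepspeed and args.sharded_ddp:
    raise ValueError("--deepspeed and --sharded_ddp cannot be used together.")
```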
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9712/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9712",
"html_url": "https://github.com/huggingface/transformers/pull/9712",
"diff_url": "https://github.com/huggingface/transformers/pull/9712.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9712.patch",
"merged_at": 1611190222000
} |
https://api.github.com/repos/huggingface/transformers/issues/9711 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9711/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9711/comments | https://api.github.com/repos/huggingface/transformers/issues/9711/events | https://github.com/huggingface/transformers/issues/9711 | 790,437,584 | MDU6SXNzdWU3OTA0Mzc1ODQ= | 9,711 | Add support for RemBERT | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
}
] | [
"Decided it would be easier for us to take care of this since we plan to directly release the model checkpoint in huggingface.\r\n\r\nStarted working on it over the week-end, will share PR once it is more polished. ",
"This is great news @Iwontbecreative! Let us know if you need help."
] | 1,611 | 1,627 | 1,627 | COLLABORATOR | null | # 🌟 New model addition
## Model description
Hi,
I just found this really interesting upcoming ICLR 2021 paper: "Rethinking Embedding Coupling in Pre-trained Language Models":
> We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to significantly improve the efficiency of parameter allocation in the input embedding of multilingual models. By reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on standard natural language understanding tasks with the same number of parameters during fine-tuning. We also show that allocating additional capacity to the output embedding provides benefits to the model that persist through the fine-tuning stage even though the output embedding is discarded after pre-training. Our analysis shows that larger output embeddings prevent the model's last layers from overspecializing to the pre-training task and encourage Transformer representations to be more general and more transferable to other tasks and languages. Harnessing these findings, we are able to train models that achieve strong performance on the XTREME benchmark without increasing the number of parameters at the fine-tuning stage.
Paper can be found [here](https://openreview.net/forum?id=xpFFI_NtgpW).
Thus, the authors propose a new *Rebalanced mBERT (**RemBERT**) model* that outperforms XLM-R. An integration into Transformers would be awesome!
I would really like to help with the integration into Transformers, as soon as the model is out!
## Open source status
* [ ] the model implementation is available: authors plan to release model implementation
* [ ] the model weights are available: authors plan to release model checkpoint
* [ ] who are the authors: @hwchung27, @Iwontbecreative, Henry Tsai, Melvin Johnson and @sebastianruder
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9711/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/9711/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9710 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9710/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9710/comments | https://api.github.com/repos/huggingface/transformers/issues/9710/events | https://github.com/huggingface/transformers/issues/9710 | 790,355,407 | MDU6SXNzdWU3OTAzNTU0MDc= | 9,710 | Let Trainer provide the device to perform training | {
"login": "earlyr1",
"id": 31624290,
"node_id": "MDQ6VXNlcjMxNjI0Mjkw",
"avatar_url": "https://avatars.githubusercontent.com/u/31624290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/earlyr1",
"html_url": "https://github.com/earlyr1",
"followers_url": "https://api.github.com/users/earlyr1/followers",
"following_url": "https://api.github.com/users/earlyr1/following{/other_user}",
"gists_url": "https://api.github.com/users/earlyr1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/earlyr1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/earlyr1/subscriptions",
"organizations_url": "https://api.github.com/users/earlyr1/orgs",
"repos_url": "https://api.github.com/users/earlyr1/repos",
"events_url": "https://api.github.com/users/earlyr1/events{/privacy}",
"received_events_url": "https://api.github.com/users/earlyr1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | NONE | null | # 🚀 Feature request
The `TrainingArguments` object [chooses](https://github.com/huggingface/transformers/blob/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2/src/transformers/training_args.py#L477) the training device by itself (cuda:0 by default). I request the possibility for a user to choose it :)
## Motivation
Imagine a situation where we have a cluster with several GPUs and the memory of cuda:0 is full (I have it right now :)). In that case the user cannot use the Trainer object for training.
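For reference, the usual workaround is hiding the busy GPUs from the process before anything touches CUDA, so that cuda:0 maps to a free card; the index below is just an example:
```python
import os

# Must run before torch initializes CUDA: remap which physical GPU
# becomes visible as cuda:0 for this process.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"  # example: use the third GPU

from transformers import TrainingArguments
args = TrainingArguments(output_dir="out")  # will now pick the visible GPU
```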
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9710/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9709 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9709/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9709/comments | https://api.github.com/repos/huggingface/transformers/issues/9709/events | https://github.com/huggingface/transformers/issues/9709 | 790,320,177 | MDU6SXNzdWU3OTAzMjAxNzc= | 9,709 | DeepSpeed: Exits with CUDA runtime error on A100 (requires recompiling DeepSpeed for NVIDIA 8.0 Arch) | {
"login": "PeterAJansen",
"id": 3813268,
"node_id": "MDQ6VXNlcjM4MTMyNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3813268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PeterAJansen",
"html_url": "https://github.com/PeterAJansen",
"followers_url": "https://api.github.com/users/PeterAJansen/followers",
"following_url": "https://api.github.com/users/PeterAJansen/following{/other_user}",
"gists_url": "https://api.github.com/users/PeterAJansen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PeterAJansen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PeterAJansen/subscriptions",
"organizations_url": "https://api.github.com/users/PeterAJansen/orgs",
"repos_url": "https://api.github.com/users/PeterAJansen/repos",
"events_url": "https://api.github.com/users/PeterAJansen/events{/privacy}",
"received_events_url": "https://api.github.com/users/PeterAJansen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Pinging @stas00",
"Heh, actually I wrote this section: https://www.deepspeed.ai/tutorials/advanced-install/#building-for-the-correct-architectures and the autodetector, since I originally had the same issue.\r\n\r\nThis problem is also partially in pytorch - which is now fixed too in pytorch-nightly.\r\n\r\n `TORCH_CUDA_ARCH_LIST` is there if you say want to use the binary build on another machine or want to optimize it for whatever reason. e.g. I build it with:\r\n```\r\nTORCH_CUDA_ARCH_LIST=\"6.1;8.6\" DS_BUILD_OPS=1 pip install --no-cache -v --disable-pip-version-check -e .\r\n```\r\nbecause I have 1070 and 3090 cards.\r\n\r\nI'm glad you found a way to solve it.\r\n\r\nNow, this is a purely DeepSpeed issue and has nothing to do with transformers, other than perhaps a documentation issue.\r\n\r\nI'm all ears at how perhaps `transformers` can improve the doc on our side to help the users find a solution quickly.\r\n\r\n1. Probably should recommend to install from source\r\n2. but then when we bail on missing `deepspeed` we say do `pip install deepspeed` - do you think we should change that to:\r\n> `pip install deepspeed` or if it doesn't work install from source?\r\n\r\nThe thing is `pip install deepspeed` is installing from source, but I think it perhaps isn't using the same build script? So should we say:\r\n> `pip install deepspeed` or if it doesn't work install from https://github.com/microsoft/deepspeed? \r\n\r\nor may be easier to just say: \r\n> install from https://github.com/microsoft/deepspeed?\r\n\r\nWhat happens if you install with:\r\n```\r\nDS_BUILD_OPS=1 pip install deepspeed\r\n```\r\n\r\nPerhaps your issue is JIT/PTX which happens if you don't do the above - i.e. the binary build gets postpone till run time. `DS_BUILD_OPS=1` forces the binary build.\r\n\r\nIn any case let's discuss this over at DeepSpeed Issues - @PeterAJansen, would you please open an issue there because only you can report/reproduce the specific error - should they fix the pip build. and tag me?\r\n\r\nBTW, fairscale has its own issues with `pip install fairscale` - I also have to build from the the repo, because I am forced to use pytorch-nightly due to rtx-30* and it won't build at all via `pip` directly.\r\n\r\nso whatever we decide we should do the same for `fairscale`.\r\n\r\nThank you!",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,611 | 1,614 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 4.3.0 (unofficial, off current main branch)
- Platform: Linux
- Python version: 3.7
- PyTorch version (GPU?): 1.7.1
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
## Information
Model I am using (Bert, XLNet ...): T-5
The problem arises when using:
* [ ] the official example scripts: examples/seq2seq/finetune_trainer.py
## Issue
In the hopes this saves others some time since it took a while for me to fix: When running the new DeepSpeed mode in Transformers 4.3.0 on an A100 GPU, it will exit with a runtime error:
RuntimeError: CUDA error: no kernel image is available for execution on the device
For me, this was due to installing DeepSpeed from pip rather than from source. The A100 architecture appears not to be (as of this writing) included in the default build. If you install from source as described in this post ( https://www.deepspeed.ai/tutorials/advanced-install/ ), the error goes away. The post suggests selecting the architecture using the TORCH_CUDA_ARCH_LIST environment variable, but I found that just using the install.sh script (which I assume auto-detects the architecture of your GPU) worked more reliably.
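For completeness, the explicit-architecture variant of the source build would look roughly like this for an A100 (compute capability 8.0); I did not test this exact command, so treat it as a sketch based on the instructions linked above:
```bash
git clone https://github.com/microsoft/DeepSpeed
cd DeepSpeed
# Pre-build the CUDA ops for the A100 architecture instead of relying on JIT.
TORCH_CUDA_ARCH_LIST="8.0" DS_BUILD_OPS=1 pip install .
```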
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9709/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9708 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9708/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9708/comments | https://api.github.com/repos/huggingface/transformers/issues/9708/events | https://github.com/huggingface/transformers/pull/9708 | 790,264,576 | MDExOlB1bGxSZXF1ZXN0NTU4NDc5MDIx | 9,708 | fix typo | {
"login": "Muennighoff",
"id": 62820084,
"node_id": "MDQ6VXNlcjYyODIwMDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Muennighoff",
"html_url": "https://github.com/Muennighoff",
"followers_url": "https://api.github.com/users/Muennighoff/followers",
"following_url": "https://api.github.com/users/Muennighoff/following{/other_user}",
"gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions",
"organizations_url": "https://api.github.com/users/Muennighoff/orgs",
"repos_url": "https://api.github.com/users/Muennighoff/repos",
"events_url": "https://api.github.com/users/Muennighoff/events{/privacy}",
"received_events_url": "https://api.github.com/users/Muennighoff/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
fix typo
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9708/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9708",
"html_url": "https://github.com/huggingface/transformers/pull/9708",
"diff_url": "https://github.com/huggingface/transformers/pull/9708.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9708.patch",
"merged_at": 1611217262000
} |
https://api.github.com/repos/huggingface/transformers/issues/9707 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9707/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9707/comments | https://api.github.com/repos/huggingface/transformers/issues/9707/events | https://github.com/huggingface/transformers/pull/9707 | 790,232,980 | MDExOlB1bGxSZXF1ZXN0NTU4NDUyNDc5 | 9,707 | Allow text generation for ProphetNetForCausalLM | {
"login": "guillaume-be",
"id": 27071604,
"node_id": "MDQ6VXNlcjI3MDcxNjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/27071604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guillaume-be",
"html_url": "https://github.com/guillaume-be",
"followers_url": "https://api.github.com/users/guillaume-be/followers",
"following_url": "https://api.github.com/users/guillaume-be/following{/other_user}",
"gists_url": "https://api.github.com/users/guillaume-be/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guillaume-be/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guillaume-be/subscriptions",
"organizations_url": "https://api.github.com/users/guillaume-be/orgs",
"repos_url": "https://api.github.com/users/guillaume-be/repos",
"events_url": "https://api.github.com/users/guillaume-be/events{/privacy}",
"received_events_url": "https://api.github.com/users/guillaume-be/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot for fixing it @guillaume-be "
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
The configuration for ProphetNetForCausalLM is overwritten at initialization to ensure that it is used as a decoder (and not as an encoder_decoder) for text generation.
The initialization of the parent class for ProphetNetForCausalLM is done before this overwrite, so `model.config.is_encoder_decoder` may remain True. This leads to an error if the model's generate method is later called, as the non-existent method `get_encoder` is invoked.
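Schematically, the requirement is just that the decoder-only flags land on the config before the parent constructor consumes it; for illustration only (not the literal patch):
```python
import copy
from transformers import ProphetNetConfig

config = copy.deepcopy(ProphetNetConfig())
# These flags must be in place before the parent __init__ runs, otherwise
# generate() still follows the encoder-decoder path and calls get_encoder().
config.is_decoder = True
config.is_encoder_decoder = False
```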
Fixes https://github.com/huggingface/transformers/issues/9702
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9707/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9707",
"html_url": "https://github.com/huggingface/transformers/pull/9707",
"diff_url": "https://github.com/huggingface/transformers/pull/9707.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9707.patch",
"merged_at": 1611224018000
} |
https://api.github.com/repos/huggingface/transformers/issues/9706 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9706/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9706/comments | https://api.github.com/repos/huggingface/transformers/issues/9706/events | https://github.com/huggingface/transformers/pull/9706 | 790,163,401 | MDExOlB1bGxSZXF1ZXN0NTU4Mzk0NjUz | 9,706 | [PR/Issue templates] normalize, group, sort + add myself for deepspeed | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Once the PR template is complete and everybody is happy I will sync with the Issue template. So please only review the former if you're just joining in.",
"should we add bullets? As in:\r\n\r\n```\r\n\r\nModels:\r\n\r\n- albert, bert, xlm: @LysandreJik\r\n- blenderbot, bart, marian, pegasus, encoderdecoder, longformer, reformer, t5, transfoxl, xlnet: @patrickvonplaten\r\n- fsmt: @stas00\r\n- funnel: @sgugger\r\n- gpt2: @patrickvonplaten, @LysandreJik\r\n- rag: @patrickvonplaten, @lhoestq\r\n- tensorflow: @jplu\r\n\r\nLibrary:\r\n\r\n- benchmarks: @patrickvonplaten\r\n- deepspeed: @stas00\r\n- ray/raytune: @richardliaw, @amogkam\r\n- text generation: @patrickvonplaten\r\n- tokenizers: @n1t0\r\n- trainer: @sgugger\r\n\r\nDocumentation: @sgugger\r\n\r\nHF projects:\r\n\r\n- nlp datasets: [different repo](https://github.com/huggingface/nlp)\r\n- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)\r\n\r\nExamples:\r\n\r\n- maintained examples (not research project or legacy): @sgugger, @patil-suraj\r\n- research_projects/bert-loses-patience: @JetRunner\r\n- research_projects/distillation: @VictorSanh\r\n```",
"I like bullets.",
"someone to tag for ONNX issues? @mfuntowicz?"
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | This PR:
* case-normalizes, groups and sorts the tagging entries
* removes one duplicate
* adds myself for deepspeed
* adds/removes/moves others based on their suggestions through this PR
@LysandreJik, @sgugger, @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9706/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9706",
"html_url": "https://github.com/huggingface/transformers/pull/9706",
"diff_url": "https://github.com/huggingface/transformers/pull/9706.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9706.patch",
"merged_at": 1611637741000
} |
https://api.github.com/repos/huggingface/transformers/issues/9705 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9705/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9705/comments | https://api.github.com/repos/huggingface/transformers/issues/9705/events | https://github.com/huggingface/transformers/pull/9705 | 790,153,146 | MDExOlB1bGxSZXF1ZXN0NTU4Mzg2MDg2 | 9,705 | [deepspeed] fix the backward for deepspeed | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for fixing!"
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | This PR fixes a bug in my deepspeed integration - `backward` needs to be called on the deepspeed object.
@sgugger
Fixes: https://github.com/huggingface/transformers/issues/9694
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9705/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9705",
"html_url": "https://github.com/huggingface/transformers/pull/9705",
"diff_url": "https://github.com/huggingface/transformers/pull/9705.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9705.patch",
"merged_at": 1611162428000
} |
https://api.github.com/repos/huggingface/transformers/issues/9704 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9704/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9704/comments | https://api.github.com/repos/huggingface/transformers/issues/9704/events | https://github.com/huggingface/transformers/issues/9704 | 790,152,367 | MDU6SXNzdWU3OTAxNTIzNjc= | 9,704 | ValueError("The training dataset must have an asserted cardinality") when running run_tf_ner.py | {
"login": "Xuanfang1121",
"id": 33194029,
"node_id": "MDQ6VXNlcjMzMTk0MDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/33194029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Xuanfang1121",
"html_url": "https://github.com/Xuanfang1121",
"followers_url": "https://api.github.com/users/Xuanfang1121/followers",
"following_url": "https://api.github.com/users/Xuanfang1121/following{/other_user}",
"gists_url": "https://api.github.com/users/Xuanfang1121/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Xuanfang1121/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Xuanfang1121/subscriptions",
"organizations_url": "https://api.github.com/users/Xuanfang1121/orgs",
"repos_url": "https://api.github.com/users/Xuanfang1121/repos",
"events_url": "https://api.github.com/users/Xuanfang1121/events{/privacy}",
"received_events_url": "https://api.github.com/users/Xuanfang1121/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Maybe @jplu has an idea!",
"Hello!\r\n\r\nThis error is always raised by the TFTrainer when your dataset has not a cardinality attached.\r\n\r\nCan you give me the version of the `run_tf_ner.py` you are using please?",
"The run_tf_ner.py I used was downloaded from this https://github.com/huggingface/transformers/tree/master/examples/token-classification, transformer version is 4.2.0, tensorflow == 2.4.0\r\n @jplu ",
"Are you sure this is the exact version or not from another commit? Because I see a cardinality assigned in the current script. Even thought the script is not working since 4.2.0 but for a diffeerent reason.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,611 | 1,614 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 4.2.0
- Platform: linux
- Python version: python3.6
- PyTorch version (GPU?): 1.7.1 gpu
- Tensorflow version (GPU?):2.4.0
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?:no
### Who can help
@stefan-it
## Information
Model I am using (Bert, XLNet ...): bert-base-multilingual-cased
The problem arises when using:
- [yes ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name) ner GermEval 2014
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Created a new conda environment using conda create -n xftf2 python=3.6
2. pip install transformers==4.2.0 tensorflow==2.4 torch==1.7.1
3. Prepare the dataset (train.txt, test.txt, dev.txt) according to the README under the token-classification folder, then run run_tf_ner.py
setting from_pt=True, with the following parameters:
```
--data_dir ./data \
--labels ./data/labels.txt \
--model_name_or_path bert-base-multilingual-cased \
--output_dir ./output \
--max_seq_length 128 \
--num_train_epochs 4\
--per_device_train_batch_size 32 \
--save_steps 500 \
--seed 100 \
--do_train \
--do_eval \
--do_predict
```
Here is the stack trace:
```
01/21/2021 00:12:18 - INFO - utils_ner - *** Example ***
01/21/2021 00:12:18 - INFO - utils_ner - guid: dev-5
01/21/2021 00:12:18 - INFO - utils_ner - tokens: [CLS] Dara ##us entwickelte sich im Rok ##oko die Sitt ##e des gemeinsamen Wein ##ens im Theater , das die Stand ##es ##grenze ##n innerhalb des Publikum ##s über ##brücken sollte . [SEP]
01/21/2021 00:12:18 - INFO - utils_ner - input_ids: 101 95621 10251 28069 10372 10211 51588 20954 10128 105987 10112 10139 58090 90462 12457 10211 16223 117 10242 10128 15883 10171 58433 10115 21103 10139 63332 10107 10848 99765 17799 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/21/2021 00:12:18 - INFO - utils_ner - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/21/2021 00:12:18 - INFO - utils_ner - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/21/2021 00:12:18 - INFO - utils_ner - label_ids: -1 24 -1 24 24 24 6 -1 24 24 -1 24 24 24 -1 24 24 24 24 24 24 -1 -1 -1 24 24 24 -1 24 -1 24 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
Traceback (most recent call last):
File "run_tf_ner.py", line 299, in <module>
main()
File "run_tf_ner.py", line 231, in main
trainer.train()
File "/.conda/envs/xftf2/lib/python3.6/site-packages/transformers/trainer_tf.py", line 457, in train
train_ds = self.get_train_tfdataset()
File "/.conda/envs/xftf2/lib/python3.6/site-packages/transformers/trainer_tf.py", line 141, in get_train_tfdataset
raise ValueError("The training dataset must have an asserted cardinality")
ValueError: The training dataset must have an asserted cardinality
```
## Expected behavior
In such a case, are there any tips for dealing with it? I really appreciate any help you can provide.
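Edit: in case it helps others hitting the same error, a workaround suggested for similar reports (untested against this exact script; the variable names below are assumptions about the script's internals) is to assert the dataset's cardinality before handing it to `TFTrainer`:
```python
import tensorflow as tf

# Sketch only: `train_dataset` stands for the tf.data.Dataset built by run_tf_ner.py,
# and `num_train_examples` for the number of examples it contains.
train_dataset = train_dataset.apply(
    tf.data.experimental.assert_cardinality(num_train_examples)
)
# TFTrainer can then read the cardinality instead of raising the ValueError above.
```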
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9704/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9703 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9703/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9703/comments | https://api.github.com/repos/huggingface/transformers/issues/9703/events | https://github.com/huggingface/transformers/pull/9703 | 790,148,629 | MDExOlB1bGxSZXF1ZXN0NTU4MzgyMjg1 | 9,703 | Fix WAND_DISABLED test | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | COLLABORATOR | null | # What does this PR do?
As reported in #9699, the test for the WANDB_DISABLED environment variable is not working right now. This PR fixes that.
Fixes #9699
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9703/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9703",
"html_url": "https://github.com/huggingface/transformers/pull/9703",
"diff_url": "https://github.com/huggingface/transformers/pull/9703.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9703.patch",
"merged_at": 1611163824000
} |
https://api.github.com/repos/huggingface/transformers/issues/9702 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9702/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9702/comments | https://api.github.com/repos/huggingface/transformers/issues/9702/events | https://github.com/huggingface/transformers/issues/9702 | 790,099,554 | MDU6SXNzdWU3OTAwOTk1NTQ= | 9,702 | ProphetNetForCausalLM text generation fails | {
"login": "guillaume-be",
"id": 27071604,
"node_id": "MDQ6VXNlcjI3MDcxNjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/27071604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guillaume-be",
"html_url": "https://github.com/guillaume-be",
"followers_url": "https://api.github.com/users/guillaume-be/followers",
"following_url": "https://api.github.com/users/guillaume-be/following{/other_user}",
"gists_url": "https://api.github.com/users/guillaume-be/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guillaume-be/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guillaume-be/subscriptions",
"organizations_url": "https://api.github.com/users/guillaume-be/orgs",
"repos_url": "https://api.github.com/users/guillaume-be/repos",
"events_url": "https://api.github.com/users/guillaume-be/events{/privacy}",
"received_events_url": "https://api.github.com/users/guillaume-be/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You're totally right @gui11aume! Thanks for posting this issue - it would be great if you could open a PR to fix it. The checkpoint I uploaded won't work well because I just took the decoder part of the encoder-decoder model and removed all cross-attention layer. The model would have to be fine-tuned to work correctly. The main motivation to add `ProphetNetForCausalLM` however was to enable things like `Longformer2ProphetNet` as described here: https://github.com/huggingface/transformers/pull/9033"
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: latest master (4.3.0.dev0)
- Platform: win64
- Python version: 3.7
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?): N/A
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten
## Information
Model I am using: ProphetNet
The `ProphetNetForCausalLM` defined at https://github.com/huggingface/transformers/blob/88583d4958ae4cb08a4cc85fc0eb3aa02e6b68af/src/transformers/models/prophetnet/modeling_prophetnet.py#L1884 overwrites the `is_encoder_decoder` flag to a value of False to ensure the model is used as a decoder only, regardless of what is given in the configuration file.
However, the initialization of the parent class is done before this overwrite, causing `model.config.is_encoder_decoder` to possibly remain `True`. This leads to an error if the `generate` method of the model is later called, as the non-existent method `get_encoder` is called:
```python
AttributeError: 'ProphetNetForCausalLM' object has no attribute 'get_encoder'
```
The script below allows reproducing:
```python
from transformers import ProphetNetTokenizer, ProphetNetForCausalLM
tokenizer = ProphetNetTokenizer.from_pretrained('microsoft/prophetnet-large-uncased')
model = ProphetNetForCausalLM.from_pretrained('patrickvonplaten/prophetnet-decoder-clm-large-uncased').cuda()
model = model.eval()
input_sentences = ["It was a very nice and sunny"]
inputs = tokenizer(input_sentences, return_tensors='pt')
# Generate text
summary_ids = model.generate(inputs['input_ids'].cuda(),
num_beams=4,
temperature=1.0,
top_k=50,
top_p=1.0,
repetition_penalty=1.0,
min_length=10,
max_length=32,
no_repeat_ngram_size=3,
do_sample=False,
early_stopping=True)
model_output = tokenizer.batch_decode(summary_ids, skip_special_tokens=True)
```
## Step to fix it
The call to `super().__init__(config)` in the initialization method should be moved from modeling_prophetnet.py#L1886 to modeling_prophetnet.py#L1890 (after the configuration object has been modified). If you agree, I could submit a small PR with this change; I tested it locally and the model no longer crashes.
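Until such a change is merged, a possible user-side workaround (only a sketch, untested) is to force the flag on the loaded config before calling `generate`, so that `get_encoder` is never looked up:
```python
from transformers import ProphetNetForCausalLM, ProphetNetTokenizer

tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
model = ProphetNetForCausalLM.from_pretrained(
    "patrickvonplaten/prophetnet-decoder-clm-large-uncased"
).eval()

# Workaround: make generate() treat the model as decoder-only.
model.config.is_encoder_decoder = False

inputs = tokenizer("It was a very nice and sunny", return_tensors="pt")
summary_ids = model.generate(inputs.input_ids, num_beams=4, max_length=32)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```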
As a side note, After the fix, the generation quality remains very poor, is there a pretrained snapshot for ProphetNet that can actually be used for causal generation? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9702/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9701 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9701/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9701/comments | https://api.github.com/repos/huggingface/transformers/issues/9701/events | https://github.com/huggingface/transformers/issues/9701 | 789,993,047 | MDU6SXNzdWU3ODk5OTMwNDc= | 9,701 | how to run pegasus finetune on multiple gpus | {
"login": "cheop-byeon",
"id": 55306172,
"node_id": "MDQ6VXNlcjU1MzA2MTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/55306172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cheop-byeon",
"html_url": "https://github.com/cheop-byeon",
"followers_url": "https://api.github.com/users/cheop-byeon/followers",
"following_url": "https://api.github.com/users/cheop-byeon/following{/other_user}",
"gists_url": "https://api.github.com/users/cheop-byeon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cheop-byeon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cheop-byeon/subscriptions",
"organizations_url": "https://api.github.com/users/cheop-byeon/orgs",
"repos_url": "https://api.github.com/users/cheop-byeon/repos",
"events_url": "https://api.github.com/users/cheop-byeon/events{/privacy}",
"received_events_url": "https://api.github.com/users/cheop-byeon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please use the [forums](https://discuss.huggingface.co/) to ask questions like this. Also note that there is no `finetune` script in the example folder anymore, so you should probably be using `finetune_trainer` or `run_seq2seq`."
] | 1,611 | 1,611 | 1,611 | NONE | null | ## Environment Information
- transformers version: 4.2.0dev0
- Platform: Linux-3.10.0-1062.18.1.el7.x86_64-x86_64-with-centos-7.7.1908-Core
- Python version: 3.7.3
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
## Who might help
@sgugger
@patrickvonplaten
@patil-suraj
## Information
The fine-tuning process is taking a really long time, so I want to run it in parallel on multiple GPUs.
The problem arises when using:
I have not found instructions on which arguments to use for training on multiple GPUs (e.g. something like a number of nodes or devices); are such options available, or should I implement this in my own script? (A hedged launch sketch is included after the command below.)
## To reproduce
```
python finetune.py \
--gpus 0 \
--learning_rate=1e-4 \
--do_train \
--do_predict \
--n_val 1000 \
--val_check_interval 0.25 \
--max_source_length 512 --max_target_length 56 \
--freeze_embeds --label_smoothing 0.1 --adafactor --task summarization_xsum \
--model_name_or_path google/pegasus-xsum \
--output_dir=xsum_results \
--data_dir xsum \
--tokenizer_name google/pegasus-large \
"$@"
```
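For the multi-GPU part of the question, a hedged sketch of one way to launch the Trainer-based `finetune_trainer.py` on two GPUs (the exact flags are assumptions based on the old seq2seq README, so please double-check them against the script you actually use):
```
# Sketch only; replace the script name and flags with the ones from your setup.
python -m torch.distributed.launch --nproc_per_node=2 finetune_trainer.py \
    --model_name_or_path google/pegasus-large \
    --data_dir xsum \
    --output_dir xsum_results \
    --max_source_length 512 --max_target_length 56 \
    --do_train --do_predict
```
For the PyTorch-Lightning-based `finetune.py` shown above, passing `--gpus 2` instead of `--gpus 0` may be all that is needed, but I have not verified that.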
Also, which of the following is correct? I saw both in other posts:
--model_name_or_path google/pegasus-xsum
--tokenizer_name google/pegasus-large \
or
--model_name_or_path google/pegasus-large
--tokenizer_name google/pegasus-xsum \
I think it should be the second one, but I am not sure.
## Expected behavior
1. Enable the finetune of pegasus model on multiple gpus.
2. Inject the correct arguments. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9701/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9701/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9700 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9700/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9700/comments | https://api.github.com/repos/huggingface/transformers/issues/9700/events | https://github.com/huggingface/transformers/issues/9700 | 789,978,878 | MDU6SXNzdWU3ODk5Nzg4Nzg= | 9,700 | NAN return from F.softmax function in pytorch implementation of BART self-attention | {
"login": "KaiQiangSong",
"id": 9112038,
"node_id": "MDQ6VXNlcjkxMTIwMzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9112038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KaiQiangSong",
"html_url": "https://github.com/KaiQiangSong",
"followers_url": "https://api.github.com/users/KaiQiangSong/followers",
"following_url": "https://api.github.com/users/KaiQiangSong/following{/other_user}",
"gists_url": "https://api.github.com/users/KaiQiangSong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KaiQiangSong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KaiQiangSong/subscriptions",
"organizations_url": "https://api.github.com/users/KaiQiangSong/orgs",
"repos_url": "https://api.github.com/users/KaiQiangSong/repos",
"events_url": "https://api.github.com/users/KaiQiangSong/events{/privacy}",
"received_events_url": "https://api.github.com/users/KaiQiangSong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It may cause similar issues in other models and other versions of the same model as well.",
"HI @KaiQiangSong \r\n\r\nWe haven't yet observed `NaN`s with BART specifically, could you post a code snippet where the model returns `NaN` so we could take a look ?",
"> HI @KaiQiangSong\r\n> \r\n> We haven't yet observed `NaN`s with BART specifically, could you post a code snippet where the model returns `NaN` so we could take a look ?\r\n\r\nSorry that, I couldn't publish my code now due to it is unpublished research.\r\nI've fixed the issue myself with changing the mask_fill of float(\"-inf\") to -1e5 (for supporting AMP as well).\r\nJust post this issue here to let you know there might be a potential issue.",
"I have the same exact problem.\r\n\r\nIll try with the -1e5 trick and see if it helps me too.\r\n\r\nThanks a lot!",
"> I have the same exact problem.\r\n> \r\n> Ill try with the -1e5 trick and see if it helps me too.\r\n> \r\n> Thanks a lot!\r\n\r\nglad that my solution helps.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,611 | 1,619 | 1,619 | NONE | null | Pytorch 1.7.1 with GPU
transformers 3.0.2
Filling all masked positions with "-inf" may cause the softmax to return NaN values, for example when every attention score in a row is masked. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9700/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9699 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9699/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9699/comments | https://api.github.com/repos/huggingface/transformers/issues/9699/events | https://github.com/huggingface/transformers/issues/9699 | 789,973,748 | MDU6SXNzdWU3ODk5NzM3NDg= | 9,699 | WANDB_DISABLED env variable not working as expected | {
"login": "Wadaboa",
"id": 1256055,
"node_id": "MDQ6VXNlcjEyNTYwNTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1256055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wadaboa",
"html_url": "https://github.com/Wadaboa",
"followers_url": "https://api.github.com/users/Wadaboa/followers",
"following_url": "https://api.github.com/users/Wadaboa/following{/other_user}",
"gists_url": "https://api.github.com/users/Wadaboa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wadaboa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wadaboa/subscriptions",
"organizations_url": "https://api.github.com/users/Wadaboa/orgs",
"repos_url": "https://api.github.com/users/Wadaboa/repos",
"events_url": "https://api.github.com/users/Wadaboa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wadaboa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You're right, thanks for reporting! The PR mentioned above should fix that."
] | 1,611 | 1,611 | 1,611 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.1
- Platform: Linux-5.4.34-1-pve-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
I'm using modified scripts, but the error is related to a specific function in the `integrations.py` module, as explained below.
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQUaD
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Make sure that `wandb` is installed on your system and set the environment variable `WANDB_DISABLED` to "true", which should entirely disable `wandb` logging
2. Create an instance of the `Trainer` class
3. Observe that the Trainer always reports the error "WandbCallback requires wandb to be installed. Run `pip install wandb`."
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I would have expected to disable `wandb`, but instead setting the `WANDB_DISABLED` environment variable completely prevents the user from using `wandb`.
After a bit of digging in the source code, I discovered that the `Trainer` uses the `WandbCallback` class (in `integrations.py`) to handle `wandb` logging. In that class, the `__init__` method has the following lines:
```python
has_wandb = is_wandb_available()
assert has_wandb, "WandbCallback requires wandb to be installed. Run `pip install wandb`."
```
In particular, by checking the `is_wandb_available()` function, we can see that it performs the following check:
```python
if os.getenv("WANDB_DISABLED"):
return False
```
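The problem with this check is explained just below; for comparison, a more defensive version (only a sketch of the idea, not necessarily how it will be fixed upstream) would parse the value instead of relying on the string's truthiness:
```python
import os

def wandb_disabled() -> bool:
    # Only explicitly "truthy" values disable wandb; unset or "false" keeps it enabled.
    return os.getenv("WANDB_DISABLED", "").upper() in {"1", "TRUE", "YES", "ON"}
```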
The original `os.getenv` check does not seem to be correct, since environment variables are stored as strings and the truth value of a string only depends on whether it is empty. So, for example, leaving `WANDB_DISABLED` unset keeps `wandb` enabled, but setting it to any non-empty value, including "false", entirely disables `wandb`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9699/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9698 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9698/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9698/comments | https://api.github.com/repos/huggingface/transformers/issues/9698/events | https://github.com/huggingface/transformers/issues/9698 | 789,945,749 | MDU6SXNzdWU3ODk5NDU3NDk= | 9,698 | Model Parallelism for DeBERTa | {
"login": "saichandrapandraju",
"id": 41769919,
"node_id": "MDQ6VXNlcjQxNzY5OTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/41769919?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saichandrapandraju",
"html_url": "https://github.com/saichandrapandraju",
"followers_url": "https://api.github.com/users/saichandrapandraju/followers",
"following_url": "https://api.github.com/users/saichandrapandraju/following{/other_user}",
"gists_url": "https://api.github.com/users/saichandrapandraju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saichandrapandraju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saichandrapandraju/subscriptions",
"organizations_url": "https://api.github.com/users/saichandrapandraju/orgs",
"repos_url": "https://api.github.com/users/saichandrapandraju/repos",
"events_url": "https://api.github.com/users/saichandrapandraju/events{/privacy}",
"received_events_url": "https://api.github.com/users/saichandrapandraju/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! It's currently not implemented for DeBERTa, unfortunately. Following the document you linked, it should be pretty easy to do it in a script!",
"Hi @LysandreJik ,\r\n\r\nWill DeBERTa (or any of RoBERTa, ALBERT) work if I separate these layers as two or three parts and connect them sequentially?\r\nBecause this is what is happening in [previous link](https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html#apply-model-parallel-to-existing-modules) that I shared",
"You would need to cast the intermediate hidden states to the correct devices as well. You can see that in the example you shared, see how the intermediate hidden states were cast to cuda 1:\r\n```py\r\n def forward(self, x):\r\n x = self.seq2(self.seq1(x).to('cuda:1'))\r\n return self.fc(x.view(x.size(0), -1))\r\n```",
"Hi @LysandreJik ,\r\n\r\nFor DeBERTa, I'm able to split entire model into 'embedding', 'encoder', 'pooler', 'classifier' and 'dropout' layers as shown in below pic.\r\n\r\n\r\n\r\nWith this approach, I trained on IMDB classification task by assigning 'encoder' to second GPU and others to first 'GPU'. At the end of the training, second GPU consumed lot of memory when compared to first GPU and this resulted in 20-80 split of the entire model.\r\n\r\nSo, I tried splitting encoder layers also as shown below but getting this error - **\"TypeError: forward() takes 1 positional argument but 2 were given\"**\r\n\r\n```\r\nembed = dberta.deberta.embeddings.to('cuda:0')\r\n\r\nf6e = dberta.deberta.encoder.layer[:6].to('cuda:0')\r\n\r\nl6e = dberta.deberta.encoder.layer[6:].to('cuda:1')\r\n\r\npooler = dberta.pooler.to('cuda:0')\r\n\r\nclassifier = dberta.classifier.to('cuda:0')\r\n\r\ndropout = dberta.dropout.to('cuda:0')\r\n\r\ntest = \"this is to test deberta\"\r\n\r\ninp_ids = tok_dberta(test, return_tensors='pt').input_ids\r\natt_mask = tok_dberta(test, return_tensors='pt').attention_mask\r\n\r\nemb_out = embed(inp_ids.to('cuda:0'))\r\n\r\nfirst_6_enc_lay_out = f6e(emb_out)\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-15-379d948e5ba5> in <module>\r\n----> 1 first_6_enc_lay_out = f6e(emb_out)\r\n\r\n/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 725 result = self._slow_forward(*input, **kwargs)\r\n 726 else:\r\n--> 727 result = self.forward(*input, **kwargs)\r\n 728 for hook in itertools.chain(\r\n 729 _global_forward_hooks.values(),\r\n\r\nTypeError: forward() takes 1 positional argument but 2 were given\r\n\r\n```\r\n\r\nPlz suggest how to proceed further..",
"Hi @LysandreJik ,\r\n\r\nPlz update on the above issue that I'm facing",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,611 | 1,619 | 1,619 | NONE | null |
Hi,
Is there any way to apply [Model Parallelism](https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html#apply-model-parallel-to-existing-modules) to DeBERTa?
I want to run 'microsoft/deberta-large' on 2 GPU's (32 GB each) using [PyTorch's Model Parallelism](https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html#apply-model-parallel-to-existing-modules) . | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9698/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9697 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9697/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9697/comments | https://api.github.com/repos/huggingface/transformers/issues/9697/events | https://github.com/huggingface/transformers/pull/9697 | 789,895,962 | MDExOlB1bGxSZXF1ZXN0NTU4MTcxNzUw | 9,697 | Fix TF template | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for fixing!"
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
Fix a template issue for TF. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9697/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9697",
"html_url": "https://github.com/huggingface/transformers/pull/9697",
"diff_url": "https://github.com/huggingface/transformers/pull/9697.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9697.patch",
"merged_at": 1611151493000
} |
https://api.github.com/repos/huggingface/transformers/issues/9696 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9696/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9696/comments | https://api.github.com/repos/huggingface/transformers/issues/9696/events | https://github.com/huggingface/transformers/pull/9696 | 789,883,305 | MDExOlB1bGxSZXF1ZXN0NTU4MTYwOTc1 | 9,696 | Add notebook | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
Add a notebook to the list of community notebooks, illustrating how you can fine-tune `LayoutLMForSequenceClassification` for classifying scanned documents, such as invoices or resumes.
## Who can review?
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9696/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9696",
"html_url": "https://github.com/huggingface/transformers/pull/9696",
"diff_url": "https://github.com/huggingface/transformers/pull/9696.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9696.patch",
"merged_at": 1611155966000
} |
https://api.github.com/repos/huggingface/transformers/issues/9695 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9695/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9695/comments | https://api.github.com/repos/huggingface/transformers/issues/9695/events | https://github.com/huggingface/transformers/issues/9695 | 789,872,170 | MDU6SXNzdWU3ODk4NzIxNzA= | 9,695 | The model learns nothing after 3 epochs of training | {
"login": "geminiwenxu",
"id": 41744366,
"node_id": "MDQ6VXNlcjQxNzQ0MzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/41744366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/geminiwenxu",
"html_url": "https://github.com/geminiwenxu",
"followers_url": "https://api.github.com/users/geminiwenxu/followers",
"following_url": "https://api.github.com/users/geminiwenxu/following{/other_user}",
"gists_url": "https://api.github.com/users/geminiwenxu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/geminiwenxu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/geminiwenxu/subscriptions",
"organizations_url": "https://api.github.com/users/geminiwenxu/orgs",
"repos_url": "https://api.github.com/users/geminiwenxu/repos",
"events_url": "https://api.github.com/users/geminiwenxu/events{/privacy}",
"received_events_url": "https://api.github.com/users/geminiwenxu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead? You'll get more answers over there.\r\n\r\nThanks!"
] | 1,611 | 1,611 | 1,611 | NONE | null | I have trained a multilingual Bert model on 3 different input data configurations ( imbalanced, partial balanced, and full balanced) for the sentiment classification task. Everything works fine so far, except the zero-shot model being trainined on the full balanced dataset (training data: label balanced data; val/test data: label balanced data). however, the result is very weird:
<img width="638" alt="Screen Shot 2021-01-20 at 11 45 47 AM" src="https://user-images.githubusercontent.com/41744366/105165475-afb1b800-5b16-11eb-9d8f-d775fa9a07ee.png">
As you can see, the model has not learned anything, and it classifies everything into neutral in the testing phase.
Could anyone helps please?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9695/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9694 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9694/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9694/comments | https://api.github.com/repos/huggingface/transformers/issues/9694/events | https://github.com/huggingface/transformers/issues/9694 | 789,835,800 | MDU6SXNzdWU3ODk4MzU4MDA= | 9,694 | ModuleAttributeError: 'GPT2LMHeadModel' object has no attribute 'backward' | {
"login": "Octopirate1",
"id": 35666310,
"node_id": "MDQ6VXNlcjM1NjY2MzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/35666310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Octopirate1",
"html_url": "https://github.com/Octopirate1",
"followers_url": "https://api.github.com/users/Octopirate1/followers",
"following_url": "https://api.github.com/users/Octopirate1/following{/other_user}",
"gists_url": "https://api.github.com/users/Octopirate1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Octopirate1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Octopirate1/subscriptions",
"organizations_url": "https://api.github.com/users/Octopirate1/orgs",
"repos_url": "https://api.github.com/users/Octopirate1/repos",
"events_url": "https://api.github.com/users/Octopirate1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Octopirate1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> It would appear that line 1286 in trainer.py actually calls the backward method on the model, not the loss object. I will try rebuilding after fixing that line and seeing if it helps.\r\n\r\nThis is incorrect. It appears that the ``model_wrapped.module`` in the aforementioned trainer.py actually resolves to GPT2LMHeadModel. Another big shot in the dark, but maybe ``model_wrapped`` is never actually wrapping because I'm only using one GPU? It's very late where I live, I'll take another shot at this in the morning.",
"You then need to launch your script with the `deepspeed` launcher. Could you tell us which command you ran?\r\nAlso cc @stas00 since he added deepspeed to Trainer.",
"Yes, please tag me on any deepspeed issues.\r\n\r\nThank you for this report.\r\n\r\nI think it's a bug, it should be:\r\n\r\n```\r\nself.deepspeed.backward(loss)\r\n```\r\n\r\nI will test and send a fix.\r\n",
"The merged PR closed this report, but should you still have an issue please don't hesitate to re-open it. \r\n",
"Hello,\r\nThanks for this great deepspeed feature. I am also running into the same error both for \r\nDistilBertForSequenceClassification' object has no attribute 'backward'\r\nand for\r\nBertForSequenceClassification object has no attribute 'backward'\r\n\r\nhere is the full error:\r\n\r\n> ---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-6-beaae64139c1> in <module>\r\n 23 )\r\n 24 \r\n---> 25 trainer.train()\r\n\r\n~/anaconda3/lib/python3.7/site-packages/transformers/trainer.py in train(self, model_path, trial)\r\n 886 tr_loss += self.training_step(model, inputs)\r\n 887 else:\r\n--> 888 tr_loss += self.training_step(model, inputs)\r\n 889 self._total_flos += self.floating_point_ops(inputs)\r\n 890 \r\n\r\n~/anaconda3/lib/python3.7/site-packages/transformers/trainer.py in training_step(self, model, inputs)\r\n 1263 elif self.deepspeed:\r\n 1264 # calling on DS engine (model_wrapped == DDP(Deepspeed(PretrainedModule)))\r\n-> 1265 self.model_wrapped.module.backward(loss)\r\n 1266 else:\r\n 1267 loss.backward()\r\n\r\n~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)\r\n 574 return modules[name]\r\n 575 raise AttributeError(\"'{}' object has no attribute '{}'\".format(\r\n--> 576 type(self).__name__, name))\r\n 577 \r\n 578 def __setattr__(self, name, value):\r\n\r\nAttributeError: 'DistilBertForSequenceClassification' object has no attribute 'backward'\r\n\r\nAny idea?\r\nThanks",
"@victorstorchan, can you please ensure you use an up-to-date master?",
"Thanks for your answer. I just pip installed transformers 1h ago. It should be up-to-date right?",
"no, it won't. pip installs the released version. you need the unreleased master build, which there are several ways to go about, one of them is just:\r\n\r\n```\r\npip install git+https://github.com/huggingface/transformers\r\n```\r\n\r\n\r\n\r\n",
"My bad! Thanks @stas00 ",
"You did nothing wrong, @victorstorchan. \r\n\r\nI will propose an update to the installation page so that the distinction is loud and clear."
] | 1,611 | 1,611 | 1,611 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.1
- Platform: Linux-4.19.0-12-cloud-amd64-x86_64-with-debian-10.6
- Python version: 3.7.8
- PyTorch version (GPU?): 1.6.0a0+9907a3e (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No(?)
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
Trainer: @sgugger
## To reproduce
Steps to reproduce the behavior:
1. Set up a TrainingArguments for a GPT2LMHeadModel with the following deepspeed config:
```
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"overlap_comm": true,
"contiguous_gradients": true,
"cpu_offload": false
},
"optimizer": {
"type": "Adam",
"params": {
"adam_w_mode": true,
"lr": 3e-5,
"betas": [ 0.9, 0.999 ],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
}
}
```
2. Attempt to call `trainer.train()`.
## Expected behavior
Training should begin as expected.
## Believed bug location
It would appear that [line 1286 in trainer.py](https://github.com/huggingface/transformers/blob/76f36e183a825b8e5576256f4e057869b2e2df29/src/transformers/trainer.py#L1286) actually calls the `backward` method on the *model*, not the loss object. I will try rebuilding after fixing that line and seeing if it helps.
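For reference, a sketch of what the corrected branch could look like (this is a guess based on the discussion, not the maintainers' actual patch, and the surrounding branches are abbreviated):
```python
# Inside Trainer.training_step, after the loss has been computed (sketch only):
if self.deepspeed:
    # backward() must be called on the DeepSpeed engine wrapping the model,
    # not on the bare GPT2LMHeadModel.
    self.deepspeed.backward(loss)
else:
    loss.backward()
```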
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9694/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9694/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9693 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9693/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9693/comments | https://api.github.com/repos/huggingface/transformers/issues/9693/events | https://github.com/huggingface/transformers/issues/9693 | 789,832,985 | MDU6SXNzdWU3ODk4MzI5ODU= | 9,693 | ModuleAttributeError: 'GPT2LMHeadModel' object has no attribute 'backward' | {
"login": "Octopirate1",
"id": 35666310,
"node_id": "MDQ6VXNlcjM1NjY2MzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/35666310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Octopirate1",
"html_url": "https://github.com/Octopirate1",
"followers_url": "https://api.github.com/users/Octopirate1/followers",
"following_url": "https://api.github.com/users/Octopirate1/following{/other_user}",
"gists_url": "https://api.github.com/users/Octopirate1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Octopirate1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Octopirate1/subscriptions",
"organizations_url": "https://api.github.com/users/Octopirate1/orgs",
"repos_url": "https://api.github.com/users/Octopirate1/repos",
"events_url": "https://api.github.com/users/Octopirate1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Octopirate1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"https://github.com/huggingface/transformers/issues/9694"
] | 1,611 | 1,611 | 1,611 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version: 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?): None
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No?
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
Trainer: @sgugger
## To reproduce
Steps to reproduce the behavior:
1. Set up a TrainingArguments for a GPT2LMHeadModel with the following deepspeed config:
`{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"overlap_comm": true,
"contiguous_gradients": true,
"cpu_offload": false
},
"optimizer": {
"type": "Adam",
"params": {
"adam_w_mode": true,
"lr": 3e-5,
"betas": [ 0.9, 0.999 ],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
}
}`
2. Attempt to train.
## Expected behavior
Training should begin as expected.
## Believed bug location
It would appear that [line 1286 in trainer.py](https://github.com/huggingface/transformers/blob/76f36e183a825b8e5576256f4e057869b2e2df29/src/transformers/trainer.py#L1286) actually calls the `backward` method on the *model*, not the loss object. I will try rebuilding after fixing that line and seeing if it helps.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9693/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9692 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9692/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9692/comments | https://api.github.com/repos/huggingface/transformers/issues/9692/events | https://github.com/huggingface/transformers/issues/9692 | 789,832,723 | MDU6SXNzdWU3ODk4MzI3MjM= | 9,692 | input one model's output to another one | {
"login": "omerarshad",
"id": 16164105,
"node_id": "MDQ6VXNlcjE2MTY0MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/16164105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omerarshad",
"html_url": "https://github.com/omerarshad",
"followers_url": "https://api.github.com/users/omerarshad/followers",
"following_url": "https://api.github.com/users/omerarshad/following{/other_user}",
"gists_url": "https://api.github.com/users/omerarshad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omerarshad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omerarshad/subscriptions",
"organizations_url": "https://api.github.com/users/omerarshad/orgs",
"repos_url": "https://api.github.com/users/omerarshad/repos",
"events_url": "https://api.github.com/users/omerarshad/events{/privacy}",
"received_events_url": "https://api.github.com/users/omerarshad/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead? You'll get more answers there!\r\n\r\nThanks!"
] | 1,611 | 1,611 | 1,611 | NONE | null | Hello,
I want to create a model which generates text, and the generated text is then fed as input to another model, so the two models are basically trained together. How can I achieve this using Hugging Face?
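Note that back-propagating through generated (discrete) tokens is not differentiable out of the box, so truly joint training needs extra machinery (for example reinforcement-learning-style losses). As a starting point, here is a hedged sketch of simply chaining two models at inference time; the checkpoints below are arbitrary public ones picked for illustration:
```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)

# First model: generates text.
gen_tok = AutoTokenizer.from_pretrained("t5-small")
generator = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Second model: consumes the generated text.
clf_tok = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
classifier = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"
)

ids = gen_tok("summarize: The movie was long but wonderful.", return_tensors="pt").input_ids
generated = gen_tok.batch_decode(generator.generate(ids, max_length=20), skip_special_tokens=True)[0]
logits = classifier(**clf_tok(generated, return_tensors="pt")).logits
print(generated, logits)
```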
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9692/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9691 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9691/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9691/comments | https://api.github.com/repos/huggingface/transformers/issues/9691/events | https://github.com/huggingface/transformers/pull/9691 | 789,778,318 | MDExOlB1bGxSZXF1ZXN0NTU4MDczNTQy | 9,691 | Add DeBERTa head models | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for the review @LysandreJik, the test did fail because of the pooler. Is fixed now!"
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | This PR adds 3 head models on top of the DeBERTa base model: `DebertaForMaskedLM`, `DebertaForTokenClassification`, `DebertaForQuestionAnswering`. These are mostly copied from `modeling_bert.py` with bert->deberta.
## Who can review?
@LysandreJik
Also tagging original DeBERTa author: @BigBird01
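For anyone landing here later, a minimal usage sketch once a release containing these heads is installed (depending on the checkpoint, the MLM head weights may be newly initialized and need fine-tuning):
```python
import torch
from transformers import DebertaForMaskedLM, DebertaTokenizer

tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-base")
model = DebertaForMaskedLM.from_pretrained("microsoft/deberta-base")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
```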
Fixes #9689 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9691/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9691",
"html_url": "https://github.com/huggingface/transformers/pull/9691",
"diff_url": "https://github.com/huggingface/transformers/pull/9691.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9691.patch",
"merged_at": 1611155931000
} |
https://api.github.com/repos/huggingface/transformers/issues/9690 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9690/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9690/comments | https://api.github.com/repos/huggingface/transformers/issues/9690/events | https://github.com/huggingface/transformers/issues/9690 | 789,754,626 | MDU6SXNzdWU3ODk3NTQ2MjY= | 9,690 | Is there a C++ interface? | {
"login": "duan348733684",
"id": 26431015,
"node_id": "MDQ6VXNlcjI2NDMxMDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/26431015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/duan348733684",
"html_url": "https://github.com/duan348733684",
"followers_url": "https://api.github.com/users/duan348733684/followers",
"following_url": "https://api.github.com/users/duan348733684/following{/other_user}",
"gists_url": "https://api.github.com/users/duan348733684/gists{/gist_id}",
"starred_url": "https://api.github.com/users/duan348733684/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/duan348733684/subscriptions",
"organizations_url": "https://api.github.com/users/duan348733684/orgs",
"repos_url": "https://api.github.com/users/duan348733684/repos",
"events_url": "https://api.github.com/users/duan348733684/events{/privacy}",
"received_events_url": "https://api.github.com/users/duan348733684/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No, only Python.",
"> No, only Python.\r\n\r\nthx.\r\nIt means that using torch cannot call bert with c++, right?"
] | 1,611 | 1,611 | 1,611 | NONE | null |
Is there a C++ interface for transformers? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9690/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9689 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9689/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9689/comments | https://api.github.com/repos/huggingface/transformers/issues/9689/events | https://github.com/huggingface/transformers/issues/9689 | 789,647,423 | MDU6SXNzdWU3ODk2NDc0MjM= | 9,689 | MLM training for DeBERTa not supported: configuration class is missing | {
"login": "xiaolin-cheng",
"id": 16944705,
"node_id": "MDQ6VXNlcjE2OTQ0NzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/16944705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaolin-cheng",
"html_url": "https://github.com/xiaolin-cheng",
"followers_url": "https://api.github.com/users/xiaolin-cheng/followers",
"following_url": "https://api.github.com/users/xiaolin-cheng/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaolin-cheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiaolin-cheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaolin-cheng/subscriptions",
"organizations_url": "https://api.github.com/users/xiaolin-cheng/orgs",
"repos_url": "https://api.github.com/users/xiaolin-cheng/repos",
"events_url": "https://api.github.com/users/xiaolin-cheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiaolin-cheng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Looking at the [docs](https://huggingface.co/transformers/model_doc/deberta.html), it seems like there's currently no `DeBERTaForMaskedLM` defined. I will make a PR that adds this."
] | 1,611 | 1,611 | 1,611 | NONE | null | When I ran the example script run_mlm.py to fine tune the pretrained deberta model on a customized dataset, I got the following error. The same command worked for roberta-base.
The command:
python run_mlm.py --model_name_or_path 'microsoft/deberta-base' --train_file slogans/train.txt --validation_file slogans/test.txt --do_train --do_eval --per_device_train_batch_size 64 --per_device_eval_batch_size 64 --learning_rate 1e-3 --num_train_epochs 10 --output_dir /home/jovyan/share2/xiaolin/models/mlm/temp --save_steps 5000 --logging_steps 100
The terminal error:
Traceback (most recent call last):
File "run_mlm.py", line 409, in <module>
main()
File "run_mlm.py", line 264, in main
cache_dir=model_args.cache_dir,
File "/home/jovyan/.local/lib/python3.7/site-packages/transformers/models/auto/modeling_auto.py", line 1093, in from_pretrained
config.__class__, cls.__name__, ", ".join(c.__name__ for c in MODEL_FOR_MASKED_LM_MAPPING.keys())
ValueError: Unrecognized configuration class <class 'transformers.models.deberta.configuration_deberta.DebertaConfig'> for this kind of AutoModel: AutoModelForMaskedLM.
Model type should be one of LayoutLMConfig, DistilBertConfig, AlbertConfig, BartConfig, CamembertConfig, XLMRobertaConfig, LongformerConfig, RobertaConfig, SqueezeBertConfig, BertConfig, MobileBertConfig, FlaubertConfig, XLMConfig, ElectraConfig, ReformerConfig, FunnelConfig, MPNetConfig, TapasConfig. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9689/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9688 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9688/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9688/comments | https://api.github.com/repos/huggingface/transformers/issues/9688/events | https://github.com/huggingface/transformers/issues/9688 | 789,612,309 | MDU6SXNzdWU3ODk2MTIzMDk= | 9,688 | [Open in Colab] links not working in examples/README.md | {
"login": "wilcoln",
"id": 24209192,
"node_id": "MDQ6VXNlcjI0MjA5MTky",
"avatar_url": "https://avatars.githubusercontent.com/u/24209192?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wilcoln",
"html_url": "https://github.com/wilcoln",
"followers_url": "https://api.github.com/users/wilcoln/followers",
"following_url": "https://api.github.com/users/wilcoln/following{/other_user}",
"gists_url": "https://api.github.com/users/wilcoln/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wilcoln/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wilcoln/subscriptions",
"organizations_url": "https://api.github.com/users/wilcoln/orgs",
"repos_url": "https://api.github.com/users/wilcoln/repos",
"events_url": "https://api.github.com/users/wilcoln/events{/privacy}",
"received_events_url": "https://api.github.com/users/wilcoln/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @wilcoln \r\n\r\nYes, the links point to Github, feel free to open a PR to replace the GitHub links with colab :). Thanks!"
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | For the following tasks below, the  button contains github links instead of colab links.
- question-answering
- text-classification
- token-classification
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9688/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9687 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9687/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9687/comments | https://api.github.com/repos/huggingface/transformers/issues/9687/events | https://github.com/huggingface/transformers/issues/9687 | 789,519,989 | MDU6SXNzdWU3ODk1MTk5ODk= | 9,687 | Can't load previously built tokenizers | {
"login": "hadasah",
"id": 7191484,
"node_id": "MDQ6VXNlcjcxOTE0ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7191484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hadasah",
"html_url": "https://github.com/hadasah",
"followers_url": "https://api.github.com/users/hadasah/followers",
"following_url": "https://api.github.com/users/hadasah/following{/other_user}",
"gists_url": "https://api.github.com/users/hadasah/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hadasah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hadasah/subscriptions",
"organizations_url": "https://api.github.com/users/hadasah/orgs",
"repos_url": "https://api.github.com/users/hadasah/repos",
"events_url": "https://api.github.com/users/hadasah/events{/privacy}",
"received_events_url": "https://api.github.com/users/hadasah/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hello! To make sure I understand your issue, you're doing the following:\r\n\r\n```py\r\nAutoTokenizer.from_pretrained('facebook/blenderbot-400M-distill')\r\n```\r\n\r\non a node which has internet access, and then you're doing the same once you have no internet access. You want the library to rely on the cache that it had previously downloaded, is that right?\r\n\r\nCould you make sure you are up to date with the `master` branch, and try the following once you have no internet access:\r\n\r\n```py\r\nAutoTokenizer.from_pretrained('facebook/blenderbot-400M-distill', local_files_only=True)\r\n```\r\n\r\nThank you.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,611 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.2.1
- Platform: Linux-3.10.0-957.5.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: not for this part that triggers this error
- Using distributed or parallel set-up in script?: n
### Who can help
Probably @mfuntowicz or @patrickvonplaten
## Information
N/A -- none of the fields here applied
## To reproduce
Context: I work on a cluster where most nodes don't have internet access. Therefore I pre-build tokenizers, models, etc., from the CLI on nodes with internet access and then make sure that I can access the local caches on the other nodes. That last part -- accessing the tokenizer I've built -- is failing for the BlenderBot 400M distilled tokenizer. It's also failing for BlenderBot Small 90M, which I also built today, and potentially for others too, but it doesn't seem to be failing for roberta-base, which I had built before (and which uses a small tokenizer rather than a base one).
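Concretely, the pre-build pattern is just the following (simplified sketch; both calls point at the same shared cache directory):
```python
from transformers import AutoTokenizer

# On a node with internet access -- this populates the local cache:
AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")

# Later, on a node without internet access -- this should be served from that same cache:
AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
```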
1. `AutoTokenizer.from_pretrained('facebook/blenderbot-400M-distill')` from a node with internet access
2. the same as above, from a node without internet access
3. You should see this error getting triggered: https://github.com/huggingface/transformers/blob/14d677ca4a62facf70b28f2922b12e6cd3692a03/src/transformers/file_utils.py#L1234
Here's the specific Traceback:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lusers/margsli/miniconda3/envs/latest/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 388, in from_pretrained
    return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/usr/lusers/margsli/miniconda3/envs/latest/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1738, in from_pretrained
    resolved_vocab_files[file_id] = cached_path(
  File "/usr/lusers/margsli/miniconda3/envs/latest/lib/python3.8/site-packages/transformers/file_utils.py", line 1048, in cached_path
    output_path = get_from_cache(
  File "/usr/lusers/margsli/miniconda3/envs/latest/lib/python3.8/site-packages/transformers/file_utils.py", line 1234, in get_from_cache
    raise ValueError(
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
```
Dug around a little and found that cached_path() gets called with 7 filenames/URLs when I'm offline and only 6 when I'm online (I printed cache_path every time cached_path() gets called) -- the last one is not seen when offline, and that's the one that triggers the error. I printed the same things for other tokenizers I had previously built and didn't see this. Not sure if that's helpful, but it was as far as I got during my debugging.
## Expected behavior
no error
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9687/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9687/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9686 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9686/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9686/comments | https://api.github.com/repos/huggingface/transformers/issues/9686/events | https://github.com/huggingface/transformers/issues/9686 | 789,493,552 | MDU6SXNzdWU3ODk0OTM1NTI= | 9,686 | BertGenerationDecoder .generate() issue during inference with PyTorch Lightning | {
"login": "anicolson",
"id": 26111230,
"node_id": "MDQ6VXNlcjI2MTExMjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26111230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anicolson",
"html_url": "https://github.com/anicolson",
"followers_url": "https://api.github.com/users/anicolson/followers",
"following_url": "https://api.github.com/users/anicolson/following{/other_user}",
"gists_url": "https://api.github.com/users/anicolson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anicolson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anicolson/subscriptions",
"organizations_url": "https://api.github.com/users/anicolson/orgs",
"repos_url": "https://api.github.com/users/anicolson/repos",
"events_url": "https://api.github.com/users/anicolson/events{/privacy}",
"received_events_url": "https://api.github.com/users/anicolson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @anicolson ,\r\n\r\nWe would love to help, but sadly when you post such a long script it will be very hard and time-consuming for us to take a look at. We're happy to assist if you could provide a short, precise, and complete code snippet that is based on Transformers Seq2SeqTrainer only. Here's our guide on [how to request support](https://discuss.huggingface.co/t/how-to-request-support/3128).\r\n\r\nAlso from what I can see, seems like you are initializing bert encoder and bert decoder separately, you could directly instantiate it using the `EncoderDecoder` model class to get a seq2seq model. Here are two colab notebooks that show how to train `EncoderDecoder` models using `Seq2SeqTrainer`. The notebooks show how to fine-tune for summarization task, but could be easily adapted for translation as well.\r\n\r\n[Leverage BERT for Encoder-Decoder Summarization on CNN/Dailymail](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb)\r\n\r\n[Leverage RoBERTa for Encoder-Decoder Summarization on BBC XSum](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb)",
"Thanks for your reply, \r\n\r\nI am attempting to create a shorter version that is not so time-consuming.\r\n\r\nCertainly, the `EncoderDecoder` is an attractive option if one is using natural language, but I would like to highlight that using `BertGenerateDecoder` allows the user to provide any sequence for cross-attention, even those derived from encoders that operate on modalities other than natural language, which I think is powerful.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"> Thanks for your reply,\r\n> \r\n> I am attempting to create a shorter version that is not so time-consuming.\r\n> \r\n> Certainly, the `EncoderDecoder` is an attractive option if one is using natural language, but I would like to highlight that using `BertGenerateDecoder` allows the user to provide any sequence for cross-attention, even those derived from encoders that operate on modalities other than natural language, which I think is powerful.\r\n\r\nHi, have you tackled the problem? I encounter the exactly same problem. Any cues?"
] | 1,611 | 1,689 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.2.1
- Platform: Ubuntu 20.04.1 LTS
- Python version: 3.8.5
- PyTorch version: 1.7.1
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Tried both distributed and parallel
### Who can help
TextGeneration: @TevenLeScao
Text Generation: @patrickvonplaten
examples/seq2seq: @patil-suraj
## Information
I am using BertGenerationEncoder and BertGenerationDecoder. I am using `transformers` in combination with PyTorch Lightning.
At inference, `.generate()` outputs the same thing for each input.
I am unsure of why this is occurring; my only hunch is that PyTorch Lightning is somehow blocking the outputs of the encoder from reaching the decoder for cross-attention, as the outputs look as though the decoder is given only the `[BOS]` token for each input during inference.
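A quick check along these lines is to call the decoder's `prepare_inputs_for_generation` hook directly and see whether `encoder_hidden_states` survives into the per-step inputs that `.generate()` builds (a small self-contained sketch with a randomly initialised toy decoder; I'm assuming the hook accepts extra keyword arguments the same way `.generate()` forwards them):
```python
import torch
from transformers import BertGenerationConfig, BertGenerationDecoder

config = BertGenerationConfig(
    hidden_size=32, num_hidden_layers=2, num_attention_heads=2, intermediate_size=64,
    is_decoder=True, add_cross_attention=True,
)
decoder = BertGenerationDecoder(config)

bos_ids = torch.full((2, 1), 101, dtype=torch.long)           # dummy [BOS] ids
encoder_hidden_state = torch.zeros(2, 8, config.hidden_size)  # dummy encoder output

step_inputs = decoder.prepare_inputs_for_generation(
    bos_ids, encoder_hidden_states=encoder_hidden_state
)
# If 'encoder_hidden_states' is missing from these keys, cross-attention never sees
# the encoder output during .generate(), which would explain identical outputs.
print(step_inputs.keys())
```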
The task that I am demonstrating this issue on is:
* WMT'14 English to German.
I have had this problem occur on different tasks as well. Using WMT'14 English to German to demonstrate.
## To reproduce
I have tried to simplify this down, but unfortunately, the example is still long. Sorry about that. Please let me know if something does not work.
If torchnlp is not installed: `pip install pytorch-nlp`
If pytorch_lightning is not installed: `pip install pytorch-lightning `
```
from torchnlp.datasets.wmt import wmt_dataset
import torch
import torch.nn as nn
from torch.utils.data import DataLoader  # needed by the DataLoader calls below
from pytorch_lightning.core.datamodule import LightningDataModule
from pytorch_lightning.metrics.functional.nlp import bleu_score
import pytorch_lightning as pl
from transformers import (
BertGenerationConfig,
BertGenerationEncoder,
BertGenerationDecoder,
)
from transformers import AutoTokenizer
import os
import numpy as np
import multiprocessing
class Dataset(LightningDataModule):
def __init__(
self,
mbatch_size,
dataset_path,
encoder_tokenizer,
decoder_tokenizer,
max_len=None,
**kwargs,
):
super().__init__()
self.mbatch_size = mbatch_size
self.dataset_path = dataset_path
self.encoder_tokenizer = encoder_tokenizer
self.decoder_tokenizer = decoder_tokenizer
self.max_len = max_len
## Number of workers for DataLoader
self.n_workers = multiprocessing.cpu_count()
def setup(self, stage=None):
## Assign train & validation sets
if stage == "fit" or stage is None:
train_iterator, val_iterator = wmt_dataset(
directory=self.dataset_path,
train=True,
dev=True,
)
self.train_set = Set(
train_iterator,
self.encoder_tokenizer,
self.decoder_tokenizer,
self.max_len,
)
self.val_set = Set(
val_iterator,
self.encoder_tokenizer,
self.decoder_tokenizer,
self.max_len,
)
## Assign test set
if stage == "test" or stage is None:
test_iterator = wmt_dataset(directory=self.dataset_path, test=True)
self.test_set = Set(
test_iterator,
self.encoder_tokenizer,
self.decoder_tokenizer,
self.max_len,
)
def train_dataloader(self):
return DataLoader(
self.train_set,
batch_size=self.mbatch_size,
num_workers=self.n_workers,
shuffle=True,
)
def val_dataloader(self):
return DataLoader(
self.val_set,
batch_size=self.mbatch_size,
num_workers=self.n_workers,
)
def test_dataloader(self):
return DataLoader(
self.test_set,
batch_size=self.mbatch_size,
num_workers=self.n_workers,
)
class Set(torch.utils.data.Dataset):
def __init__(
self,
iterator,
encoder_tokenizer,
decoder_tokenizer,
max_len,
):
self.iterator = iterator
self.encoder_tokenizer = encoder_tokenizer
self.decoder_tokenizer = decoder_tokenizer
self.n_examples = len(self.iterator)
self.max_len = max_len
def __getitem__(self, index):
example = self.iterator[index]
english_encoded = self.encoder_tokenizer(
example["en"],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=self.max_len,
)
german_encoded = self.decoder_tokenizer(
example["de"],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=self.max_len,
)
return {
"input_ids": english_encoded["input_ids"][0],
"token_type_ids": english_encoded["token_type_ids"][0],
"attention_mask": english_encoded["attention_mask"][0],
"decoder_input_ids": german_encoded["input_ids"][0],
"decoder_token_type_ids": german_encoded["token_type_ids"][0],
"decoder_attention_mask": german_encoded["attention_mask"][0],
}
def __len__(self):
return self.n_examples
class BERT2BERT(nn.Module):
def __init__(self, **kwargs):
super(BERT2BERT, self).__init__()
assert "ckpt_base" in kwargs, "ckpt_base must be passed."
self.ckpt_base = kwargs["ckpt_base"]
## Tokenizer
assert (
"encoder_tokenizer" in kwargs
), "A tokenizer for the encoder must be passed."
assert (
"decoder_tokenizer" in kwargs
), "A tokenizer for the decoder must be passed."
self.encoder_tokenizer = kwargs["encoder_tokenizer"]
self.decoder_tokenizer = kwargs["decoder_tokenizer"]
## Encoder
assert "encoder_init" in kwargs, "Set encoder_init in config file."
self.encoder_init = kwargs["encoder_init"]
ckpt_dir = os.path.join(self.ckpt_base, self.encoder_init)
self.encoder = BertGenerationEncoder.from_pretrained(ckpt_dir)
## Decoder
assert "decoder_init" in kwargs, "Set decoder_init in config file."
self.decoder_init = kwargs["decoder_init"]
ckpt_dir = os.path.join(self.ckpt_base, self.decoder_init)
config = BertGenerationConfig.from_pretrained(ckpt_dir)
config.is_decoder = True
config.add_cross_attention = True
config.bos_token_id = self.decoder_tokenizer.cls_token_id
config.eos_token_id = self.decoder_tokenizer.sep_token_id
config.pad_token_id = self.decoder_tokenizer.pad_token_id
config.max_length = kwargs["max_length"] if "max_length" in kwargs else 20
config.min_length = kwargs["min_length"] if "min_length" in kwargs else 10
config.no_repeat_ngram_size = (
kwargs["no_repeat_ngram_size"] if "no_repeat_ngram_size" in kwargs else 0
)
config.early_stopping = (
kwargs["early_stopping"] if "early_stopping" in kwargs else False
)
config.length_penalty = (
kwargs["length_penalty"] if "length_penalty" in kwargs else 1.0
)
config.num_beams = kwargs["num_beams"] if "num_beams" in kwargs else 1
self.decoder = BertGenerationDecoder.from_pretrained(
ckpt_dir,
config=config,
)
def forward(self, x):
## Get last hidden state of the encoder
encoder_hidden_state = self.encoder(
input_ids=x["input_ids"],
attention_mask=x["attention_mask"],
).last_hidden_state
## Teacher forcing: labels are given as input
outp = self.decoder(
input_ids=x["decoder_input_ids"],
attention_mask=x["decoder_attention_mask"],
encoder_hidden_states=encoder_hidden_state,
)
return outp["logits"]
def generate(self, input_ids, attention_mask):
## Get last hidden state of the encoder
encoder_hidden_state = self.encoder(
input_ids=input_ids,
attention_mask=attention_mask,
).last_hidden_state
print("\n Output of encoder:")
print(encoder_hidden_state)
bos_ids = (
torch.ones(
(encoder_hidden_state.size()[0], 1),
dtype=torch.long,
device=self.decoder.device,
)
* self.decoder.config.bos_token_id
)
## Autoregressively generate predictions
return self.decoder.generate(
input_ids=bos_ids,
encoder_hidden_states=encoder_hidden_state,
)
class Seq2Seq(pl.LightningModule):
def __init__(
self,
encoder_init,
decoder_init,
encoder_tokenizer,
decoder_tokenizer,
permute_outp=False,
ckpt_base="",
ver="tmp",
print_model=True,
**kwargs,
):
super(Seq2Seq, self).__init__()
self.save_hyperparameters()
self.permute_outp = permute_outp
self.ckpt_base = ckpt_base
self.ver = ver
self.encoder_tokenizer = encoder_tokenizer
self.decoder_tokenizer = decoder_tokenizer
self.seq2seq = BERT2BERT(
encoder_init=encoder_init,
decoder_init=decoder_init,
encoder_tokenizer=encoder_tokenizer,
decoder_tokenizer=decoder_tokenizer,
ckpt_base=ckpt_base,
**kwargs,
)
## Loss function
self.loss = torch.nn.CrossEntropyLoss()
def forward(self, x):
## Iterate through the networks
return self.seq2seq(x)
def training_step(self, batch, batch_idx):
## Target
y = batch["decoder_input_ids"]
## Inference
y_hat = self(batch)
## Permute output
if self.permute_outp:
y_hat = y_hat.permute(*self.permute_outp)
## Loss
train_loss = self.loss(y_hat, y)
## Compute and log metrics
logs = {"train_loss": train_loss}
self.log_dict(logs, on_step=False, on_epoch=True)
######### TEMPORARY!!!
if batch_idx % 100 == 0:
pred = self.seq2seq.generate(
batch["input_ids"],
batch["attention_mask"],
)
pred_str = self.decoder_tokenizer.batch_decode(pred, skip_special_tokens=True)
ref_str = self.decoder_tokenizer.batch_decode(y, skip_special_tokens=True)
print("\nTraining reference labels:")
print(ref_str)
print("\n Training predictions:")
print(pred_str)
print("\n\n")
## Return training loss
return train_loss
def validation_step(self, batch, batch_idx):
print("\n\n\n Validation input_ids:")
print(batch["input_ids"])
## Generate outputs autoregressively
pred = self.seq2seq.generate(
batch["input_ids"],
batch["attention_mask"],
)
pred_str = self.decoder_tokenizer.batch_decode(pred, skip_special_tokens=True)
ref_str = self.decoder_tokenizer.batch_decode(batch["decoder_input_ids"], skip_special_tokens=True)
print("Validation reference labels:")
print(ref_str)
print("Validation predictions:")
print(pred_str)
print("\n\n")
pred_str = [i.split() for i in pred_str]
ref_str = [i.split() for i in ref_str]
self.log_dict({"val_bleu": bleu_score(pred_str, ref_str)})
def test_step(self, batch, batch_idx):
## Generate outputs autoregressively
pred = self.seq2seq.generate(
batch["input_ids"],
batch["attention_mask"],
)
pred_str = self.decoder_tokenizer.batch_decode(pred, skip_special_tokens=True)
ref_str = self.decoder_tokenizer.batch_decode(batch["decoder_input_ids"], skip_special_tokens=True)
pred_str = [i.split() for i in pred_str]
ref_str = [i.split() for i in ref_str]
self.log_dict({"test_bleu": bleu_score(pred_str, ref_str)})
def configure_optimizers(self):
self.optimisers = [torch.optim.Adam(self.parameters(), lr=4e-5)]
return self.optimisers
if __name__ == "__main__":
ckpt_base = ""
encoder_init = "bert-base-uncased"
decoder_init = "dbmdz/bert-base-german-uncased"
dataset_path = ""
encoder_tokenizer = AutoTokenizer.from_pretrained(
os.path.join(ckpt_base, encoder_init),
)
decoder_tokenizer = AutoTokenizer.from_pretrained(
os.path.join(ckpt_base, decoder_init),
)
dataset = Dataset(
mbatch_size=4,
dataset_path=dataset_path,
encoder_tokenizer=encoder_tokenizer,
decoder_tokenizer=decoder_tokenizer,
max_len=512,
)
trainer = pl.Trainer(
max_epochs=2,
num_sanity_val_steps=0,
fast_dev_run=True,
accelerator="ddp" if torch.cuda.device_count() > 1 else None,
gpus=torch.cuda.device_count() if torch.cuda.is_available() else None,
precision=16 if torch.cuda.is_available() else 32,
# log_gpu_memory and plugins are not defined in this simplified script,
# so they are left at their Trainer defaults here.
)
seq2seq = Seq2Seq(
encoder_init=encoder_init,
decoder_init=decoder_init,
encoder_tokenizer=encoder_tokenizer,
decoder_tokenizer=decoder_tokenizer,
ckpt_base=ckpt_base,
permute_outp=[0, 2, 1],
)
trainer.fit(seq2seq, datamodule=dataset)
# trainer.test(seq2seq, datamodule=dataset)
```
## Outputs of script demonstrating the issue
#### During training:
Output of encoder (to demonstrate that there is a difference per input):
```
tensor([[[-0.1545, 0.0785, 0.4573, ..., -0.3254, 0.5409, 0.4258],
[ 0.2935, -0.1310, 0.4843, ..., -0.4160, 0.8018, 0.2589],
[ 0.0649, -0.5836, 1.9177, ..., -0.3412, 0.2852, 0.8098],
...,
[ 0.1109, 0.1653, 0.5843, ..., -0.3402, 0.1081, 0.2566],
[ 0.3011, 0.0258, 0.4950, ..., -0.2070, 0.1684, -0.0199],
[-0.1004, -0.0299, 0.4860, ..., -0.2958, -0.1653, 0.0719]],
[[-0.3105, 0.0351, -0.5714, ..., -0.1062, 0.3461, 0.8927],
[ 0.0727, 0.2580, -0.6962, ..., 0.3195, 0.9559, 0.6534],
[-0.6213, 0.9008, 0.2194, ..., 0.1259, 0.1122, 0.7071],
...,
[ 0.2667, -0.1453, -0.2017, ..., 0.5667, -0.0772, -0.2298],
[ 0.4050, 0.0916, 0.2218, ..., 0.0295, -0.2065, 0.1230],
[-0.1895, 0.0259, -0.1619, ..., -0.1657, -0.0760, -0.6030]],
[[-0.1366, 0.2778, 0.1203, ..., -0.4764, 0.4009, 0.2918],
[ 0.2401, -0.2308, 1.1218, ..., -0.2140, 0.7054, 0.6656],
[-0.7005, -0.9183, 1.6280, ..., 0.2339, -0.1870, 0.0630],
...,
[-0.0212, -0.2678, 0.0711, ..., 0.2884, 0.3741, -0.2103],
[-0.0058, -0.2364, 0.2587, ..., 0.0689, 0.2010, -0.0315],
[ 0.1869, -0.0784, 0.2257, ..., -0.1498, 0.0935, -0.0234]],
[[ 0.1023, 0.0532, 0.2052, ..., -0.5335, 0.0676, 0.2436],
[-0.2254, 1.0484, -0.1338, ..., -0.9030, -0.1407, -0.2173],
[-0.8384, 0.3990, 0.6661, ..., -0.4869, 0.7780, -0.5461],
...,
[ 0.4410, 0.1868, 0.6844, ..., -0.2972, -0.1069, -0.1848],
[-0.0021, -0.0537, 0.2477, ..., 0.1877, -0.0479, -0.3762],
[ 0.1981, 0.0980, 0.3827, ..., 0.1449, 0.0403, -0.2863]]],
grad_fn=<NativeLayerNormBackward>)
```
Training reference labels:
```
[
'pau @ @ schal @ @ preis 80 € / person auf basis von 2 person @ @ nen.',
'ich finde es be @ @ denk @ @ lich, dass der bericht, den wir im ausschuss angenommen haben, so unterschiedlich ausgelegt wird.',
'die globalisierung hat eine betrachtliche veranderung der bedeutung ge @ @ ok @ @ ultur @ @ eller regionen in der welt mit sich gebracht.',
'falls sie eigentumer einer immobili @ @ e in andor @ @ ra sind, kontaktieren sie uns, um ihr apartment oder hotel hier auf @ @ zun @ @ ehem @ @ en.',
]
```
Training predictions after `.generate()` and `.batch_decode()` (garbage, but different per input):
```
[
'##exe int int int int fid fid fid fid fid fid fid fid fid fid fid fid lanz urn',
'##schleschleually vno stadien stadien stadienherzherzherzherzherzherzherzherzherzherzherzherz', '##betrtghattkerlabend verpackungahmahm te te teila einfl einfl einflierende add adduff',
'##reisreisviert fairrug ganze ganze ganze veh wz wz wz ihr x ihrverdverdverdverd',
]
```
#### During validation:
Input IDs to encoder:
```
tensor([[ 101, 1037, 3072, ..., 0, 0, 0],
[ 101, 3072, 1030, ..., 0, 0, 0],
[ 101, 2174, 1010, ..., 0, 0, 0],
[ 101, 5262, 1010, ..., 0, 0, 0]])
```
Output of encoder (to demonstrate that there is a difference per input):
```
tensor([[[-0.2494, -0.2050, -0.2032, ..., -1.0734, 0.1397, 0.4336],
[-0.2473, 0.0091, -0.2359, ..., -0.6884, 0.2158, -0.0761],
[-0.5098, -0.1364, 0.7411, ..., -1.0496, -0.0250, -0.2929],
...,
[-0.1039, -0.2547, 0.2264, ..., -0.2483, -0.2153, 0.0748],
[ 0.2561, -0.3465, 0.5167, ..., -0.2460, -0.1611, 0.0155],
[-0.0767, -0.3239, 0.4679, ..., -0.2552, -0.1551, -0.1501]],
[[-0.3001, 0.0428, -0.3463, ..., -0.6265, 0.3733, 0.3856],
[-0.1463, -0.0212, 0.1447, ..., -0.7843, -0.0542, 0.2394],
[ 0.7481, -0.3762, 0.6301, ..., 0.2269, 0.0267, -0.4466],
...,
[ 0.3723, -0.2708, 0.2251, ..., -0.0096, -0.0072, -0.2217],
[ 0.4360, -0.1101, 0.3447, ..., 0.0117, -0.0956, -0.1236],
[ 0.3221, -0.1846, 0.3263, ..., -0.0600, -0.0025, -0.1883]],
[[-0.1365, 0.1746, 0.1038, ..., -0.2151, 0.7875, 0.8574],
[ 0.1072, 0.2133, -0.8644, ..., 0.0739, 1.0464, 0.3385],
[ 0.7204, 0.2680, 0.0991, ..., -0.2964, -0.8238, -0.0604],
...,
[ 0.2686, -0.0701, 0.8973, ..., -0.0366, -0.2160, 0.0276],
[ 0.2265, -0.2171, 0.4239, ..., 0.0833, -0.0573, 0.0297],
[ 0.0690, -0.2430, 0.4186, ..., 0.0897, -0.0287, 0.0762]],
[[ 0.0408, 0.2332, -0.0992, ..., -0.2242, 0.6512, 0.4630],
[ 0.3257, 0.1358, -0.3344, ..., 0.0866, 1.0004, -0.0733],
[ 0.6827, 0.3013, 0.0672, ..., -0.2793, -0.8870, -0.0024],
...,
[ 0.4291, -0.5344, 0.0134, ..., 0.0439, 0.0617, -0.4433],
[ 0.4847, -0.2888, 0.2942, ..., 0.0153, 0.0121, -0.1231],
[ 0.4725, -0.3132, 0.3458, ..., -0.0207, 0.0517, -0.4281]]])
```
Validation reference labels:
```
[
'eine repub @ @ li @ @ kanische strategie, um der wieder @ @ wahl von obama entgegen @ @ zu @ @ treten',
'die fuhrungs @ @ krafte der republi @ @ kaner rechtfertigen ihre politik mit der notwendigkeit, den wahl @ @ betrug zu bekampfen.',
'allerdings halt das brenn @ @ an center letz @ @ teres fur einen my @ @ thos, indem es bekraftigt, dass der wahl @ @ betrug in den usa sel @ @ tener ist als die anzahl der vom bli @ @ tz @ @ schlag geto @ @ teten menschen.',
'die rechtsan @ @ walte der republi @ @ kaner haben in 10 jahren in den usa ubrigens nur 300 falle von wahl @ @ betrug ver @ @ zeichnet.',
]
```
Validation predictions after `.generate()` and `.batch_decode()` (garbage, but the same per input):
```
[
'##schleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschle',
'##schleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschle',
'##schleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschle',
'##schleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschle',
]
```
## Expected behavior
I would expect the model to generate a different output per input, as it does during training.
## Thank you for your help!
Hopefully, it is something simple that I am missing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9686/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9686/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9685 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9685/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9685/comments | https://api.github.com/repos/huggingface/transformers/issues/9685/events | https://github.com/huggingface/transformers/pull/9685 | 789,459,489 | MDExOlB1bGxSZXF1ZXN0NTU3ODAxMDUw | 9,685 | Fix Trainer and Args to mention AdamW, not Adam. | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for opening the PR. As it stands this would break every existing script leveraging the parameters defined, so renaming the parameters is probably not the way to go.\r\n\r\n@sgugger, your insight on this would be very welcome.\r\n\r\n",
"There is no reason to change all the names of the parameters indeed, and it would be a too-heavy breaking change. `AdamW` is not a different optimizer from `Adam`, it's just `Adam` with a different way (some might say the right way) of doing weight decay. I don't think we need to do more than a mention at the beginning of the docstring saying that all mentions of `Adam` are actually about `AdamW`, with a link to the paper.",
"Hi @LysandreJik @sgugger. Thanks for your comments, I'll be changing the variables back.\r\n\r\nI apologize if this is too silly a question, but how can I run and see how the docs look on a browser after the changes?",
"You should check [this page](https://github.com/huggingface/transformers/tree/master/docs#generating-the-documentation) for all the information on generating/writing the documentation :-)",
"I have updated it and also added that by default, the weight decay is applied to all layers except bias and LayerNorm weights while training.",
"@sgugger My code passed only 3 out of 12 checks, I was unable to run CirlceCI properly. Can you point out the reasons why this happened?",
"We are trying to get support from them to understand why, but the checks on your PR were all cancelled. I couldn't retrigger them from our interface either.",
"Hi @sgugger \r\n\r\nI believe a possible reason could be that I followed `transformers` on CircleCI. Maybe it performs checks on my fork of transformers and expects to find some \"resources\" which aren't there.\r\n\r\nI'm not sure how CircleCI works, so this is just a wild guess."
] | 1,611 | 1,612 | 1,611 | CONTRIBUTOR | null | This PR fixes the issue with the docs and labels in the Trainer and TrainingArguments classes for AdamW; the current version mentions Adam in several places.
Fixes #9628
The Trainer class in `trainer.py` uses AdamW as the default optimizer. The TrainingArguments class mentions it as Adam in the documentation, which was confusing.
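For context, the existing `adam_*` training arguments already feed an AdamW implementation under the hood; roughly (a minimal sketch using the `transformers` `AdamW` class and a stand-in model):
```python
import torch
from transformers import AdamW, TrainingArguments

model = torch.nn.Linear(2, 2)  # stand-in for any model whose parameters get optimized
args = TrainingArguments(output_dir="out", weight_decay=0.01)

optimizer = AdamW(
    model.parameters(),
    lr=args.learning_rate,
    betas=(args.adam_beta1, args.adam_beta2),
    eps=args.adam_epsilon,
    weight_decay=args.weight_decay,
)
```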
I have also changed variable names to `adamw_beta1`, `adamw_beta2`, `adamw_epsilon` in `trainer.py`.
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9685/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9685",
"html_url": "https://github.com/huggingface/transformers/pull/9685",
"diff_url": "https://github.com/huggingface/transformers/pull/9685.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9685.patch",
"merged_at": 1611161971000
} |
https://api.github.com/repos/huggingface/transformers/issues/9684 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9684/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9684/comments | https://api.github.com/repos/huggingface/transformers/issues/9684/events | https://github.com/huggingface/transformers/pull/9684 | 789,391,040 | MDExOlB1bGxSZXF1ZXN0NTU3NzQxMDU3 | 9,684 | Fix model templates and use less than 119 chars | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | COLLABORATOR | null | # What does this PR do?
This PR fixes the model templates that were broken by #9596 (the copies were no longer in line with the original). In passing, since I'm a dictator, I've rewritten the warning to take less than 119 chars.
Will merge as soon as CI is green. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9684/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9684",
"html_url": "https://github.com/huggingface/transformers/pull/9684",
"diff_url": "https://github.com/huggingface/transformers/pull/9684.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9684.patch",
"merged_at": 1611094283000
} |
https://api.github.com/repos/huggingface/transformers/issues/9683 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9683/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9683/comments | https://api.github.com/repos/huggingface/transformers/issues/9683/events | https://github.com/huggingface/transformers/pull/9683 | 789,366,584 | MDExOlB1bGxSZXF1ZXN0NTU3NzE5ODEx | 9,683 | Fix Funnel Transformer conversion script | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Do not miss ```transformers/cammand/convert.py``` for ```transformer-cli``` user.\r\nNeed ```base_model``` arg for ```convert_tf_checkpoint_to_pytorch(self._tf_checkpoint, self._config, self._pytorch_dump_output)```."
] | 1,611 | 1,611 | 1,611 | COLLABORATOR | null | # What does this PR do?
The conversion script was using the wrong kind of model, so it wasn't working. I've also added the option to convert the base models.
Fixes #9644
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9683/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9683",
"html_url": "https://github.com/huggingface/transformers/pull/9683",
"diff_url": "https://github.com/huggingface/transformers/pull/9683.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9683.patch",
"merged_at": 1611154221000
} |
https://api.github.com/repos/huggingface/transformers/issues/9682 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9682/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9682/comments | https://api.github.com/repos/huggingface/transformers/issues/9682/events | https://github.com/huggingface/transformers/pull/9682 | 789,348,988 | MDExOlB1bGxSZXF1ZXN0NTU3NzA0Mzg4 | 9,682 | Add a community page to the docs | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | COLLABORATOR | null | # What does this PR do?
This PR adds a new "community" page in the documentation that aims to gather information about all resources developed by the community. I copied all the community notebooks there, and we have an open PR that will also populate it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9682/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9682",
"html_url": "https://github.com/huggingface/transformers/pull/9682",
"diff_url": "https://github.com/huggingface/transformers/pull/9682.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9682.patch",
"merged_at": 1611136477000
} |
https://api.github.com/repos/huggingface/transformers/issues/9681 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9681/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9681/comments | https://api.github.com/repos/huggingface/transformers/issues/9681/events | https://github.com/huggingface/transformers/pull/9681 | 789,335,134 | MDExOlB1bGxSZXF1ZXN0NTU3NjkyNDAy | 9,681 | Restrain tokenizer.model_max_length default | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | COLLABORATOR | null | # What does this PR do?
Apply the same fix to `run_mlm` (when line_by_line is not selected) as we did previously in `run_clm`. Since the tokenizer model_max_length can be excessively large, we should restrain it when no `max_seq_length` is passed.
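The kind of guard involved looks roughly like this (an illustrative sketch against the names used in `run_mlm.py`, with an assumed cap value, not the exact diff):
```python
if data_args.max_seq_length is None:
    max_seq_length = tokenizer.model_max_length
    if max_seq_length > 1024:
        # some tokenizers report a huge sentinel value here (e.g. int(1e30)); cap it
        max_seq_length = 1024
else:
    max_seq_length = min(data_args.max_seq_length, tokenizer.model_max_length)
```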
Fixes #9665 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9681/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9681",
"html_url": "https://github.com/huggingface/transformers/pull/9681",
"diff_url": "https://github.com/huggingface/transformers/pull/9681.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9681.patch",
"merged_at": 1611134260000
} |
https://api.github.com/repos/huggingface/transformers/issues/9680 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9680/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9680/comments | https://api.github.com/repos/huggingface/transformers/issues/9680/events | https://github.com/huggingface/transformers/issues/9680 | 789,329,278 | MDU6SXNzdWU3ODkzMjkyNzg= | 9,680 | Generating sentence embeddings from pretrained transformers model | {
"login": "Prithvi103",
"id": 12830451,
"node_id": "MDQ6VXNlcjEyODMwNDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/12830451?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Prithvi103",
"html_url": "https://github.com/Prithvi103",
"followers_url": "https://api.github.com/users/Prithvi103/followers",
"following_url": "https://api.github.com/users/Prithvi103/following{/other_user}",
"gists_url": "https://api.github.com/users/Prithvi103/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Prithvi103/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Prithvi103/subscriptions",
"organizations_url": "https://api.github.com/users/Prithvi103/orgs",
"repos_url": "https://api.github.com/users/Prithvi103/repos",
"events_url": "https://api.github.com/users/Prithvi103/events{/privacy}",
"received_events_url": "https://api.github.com/users/Prithvi103/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,611 | 1,611 | 1,611 | NONE | null | Hi, I have a pretrained BERT based model hosted on huggingface.
https://huggingface.co/microsoft/SportsBERT
How do I generate sentence vectors using this model? I have explored Sentence-BERT, but it doesn't allow you to use custom trained models. I have also seen Bert as a client. It works, but for my current scenario I was wondering if there's something that could be done without running a server for converting to vectors. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9680/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9679 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9679/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9679/comments | https://api.github.com/repos/huggingface/transformers/issues/9679/events | https://github.com/huggingface/transformers/issues/9679 | 789,317,237 | MDU6SXNzdWU3ODkzMTcyMzc= | 9,679 | Visualize self-attention for GLUE task | {
"login": "Nickil21",
"id": 8767964,
"node_id": "MDQ6VXNlcjg3Njc5NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8767964?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nickil21",
"html_url": "https://github.com/Nickil21",
"followers_url": "https://api.github.com/users/Nickil21/followers",
"following_url": "https://api.github.com/users/Nickil21/following{/other_user}",
"gists_url": "https://api.github.com/users/Nickil21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nickil21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nickil21/subscriptions",
"organizations_url": "https://api.github.com/users/Nickil21/orgs",
"repos_url": "https://api.github.com/users/Nickil21/repos",
"events_url": "https://api.github.com/users/Nickil21/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nickil21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,611 | 1,611 | 1,611 | NONE | null | Is there a way to visualize the self-attention weights for different spans in a sentence, for instance, a sequence classification task inside [`run_glue.py`](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py)?
[See here for a sample](https://imgur.com/a/7gAJvCJ) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9679/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9678 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9678/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9678/comments | https://api.github.com/repos/huggingface/transformers/issues/9678/events | https://github.com/huggingface/transformers/issues/9678 | 789,226,541 | MDU6SXNzdWU3ODkyMjY1NDE= | 9,678 | bert-base-cased predicts tokens instead of whole words after fine-tuning on fill-mask task | {
"login": "LeandraFichtel",
"id": 33263354,
"node_id": "MDQ6VXNlcjMzMjYzMzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/33263354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeandraFichtel",
"html_url": "https://github.com/LeandraFichtel",
"followers_url": "https://api.github.com/users/LeandraFichtel/followers",
"following_url": "https://api.github.com/users/LeandraFichtel/following{/other_user}",
"gists_url": "https://api.github.com/users/LeandraFichtel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeandraFichtel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeandraFichtel/subscriptions",
"organizations_url": "https://api.github.com/users/LeandraFichtel/orgs",
"repos_url": "https://api.github.com/users/LeandraFichtel/repos",
"events_url": "https://api.github.com/users/LeandraFichtel/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeandraFichtel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The pipeline for masked filling can only be used to fill one token, so you should be using different code for your evaluation if you want to be able to predict more than one masked token.",
"> The pipeline for masked filling can only be used to fill one token, so you should be using different code for your evaluation if you want to be able to predict more than one masked token.\r\n\r\nThank you for your reply. I am not sure, whether I understand you right. So do you mean, that it is not possible to predict words like \"Los Angeles\" with two words or that it is also not possible to predict words like \"pseudogene\", which are one word but are not in the vocabulary and so the tokenizer splits it into ['pseudo', '##gene']? I would only like to predict words like \"pseudogene\".",
"The pipeline in itself is only coded to return one token to replace the [MASK]. So it won't be able to predict two tokens to replace one [MASK]. The model is also only trained to replace each [MASK] in its sentence by one token, so it won't be able to predict two tokens for one [MASK].\r\n\r\nFor this task, you need to either use a different model (coded yourself as it's not present in the library) or have your training set contain one [MASK] per token you want to mask. For instance if you want to mask all the tokens corresponding to one word (a technique called whole-word masking) what is typically done in training scripts is to replace all parts of one word by [MASK]. For pseudogener tokenized as pseudo, ##gene, that would mean having [MASK] [MASK].\r\n\r\nAlso, this is not a bug of the library, so the discussion should continue on the [forum](https://discuss.huggingface.co/)\r\n\r\n"
] | 1,611 | 1,611 | 1,611 | NONE | null | ## Environment info
- `transformers` version: 4.2.1
- Platform: Linux-4.15.0-126-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cu92 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
ray/raytune: @richardliaw @amogkam
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> @mfuntowicz, @sgugger
## Information
Model I am using (Bert, XLNet ...): bert-base-cased
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Extract the [training_data.zip](https://github.com/huggingface/transformers/files/5837438/training_data.zip). The training_data is structured as explained in [BertForMaskedLM](https://huggingface.co/transformers/model_doc/bert.html#bertformaskedlm).
2. Execute the code for fine-tuning to get the fine-tuned bert-base-cased (first script)
3. Evaluate the fine-tuned bert-base-cased with the code for evaluation (second script)
```
#code for fine-tuning of bert-base-cased on fill-mask-task using the files train_queries.json and train_labels.json
from transformers import BertForMaskedLM, Trainer, TrainingArguments
import json
from transformers import BertTokenizer
import torch
import shutil
import os
class MaskedDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
item['labels'] = torch.tensor(self.labels[idx])
return item
def __len__(self):
return len(self.labels)
if __name__ == "__main__":
#used LM
lm_name = 'bert-base-cased'
model_path = "bert_base_cased_finetuned"
if os.path.exists(model_path):
print("remove dir of model")
shutil.rmtree(model_path)
os.mkdir(model_path)
#pepare training dataset
#read datasets from path
train_queries = json.load(open("train_queries.json", "r"))
train_labels = json.load(open("train_labels.json", "r"))
#use tokenizer to get encodings
tokenizer = BertTokenizer.from_pretrained(lm_name)
train_question_encodings = tokenizer(train_queries, truncation=True, padding='max_length', max_length=256)
train_label_encodings = tokenizer(train_labels, truncation=True, padding='max_length', max_length=256)["input_ids"]
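    # Note: these label ids cover every position, including padding; BertForMaskedLM only
    # ignores labels equal to -100, so the padded positions also contribute to the loss here.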
#get final datasets for training
train_dataset = MaskedDataset(train_question_encodings, train_label_encodings)
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=3, # total number of training epochs
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=64, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir=model_path+'/logs', # directory for storing logs
logging_steps=10,
save_total_limit=0
)
model = BertForMaskedLM.from_pretrained(lm_name)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset # training dataset
)
trainer.train()
trainer.save_model(model_path)
```
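One thing that may be worth checking in the label preparation above (a hedged observation, not something verified on this dataset): the label encodings keep the pad token ids, and `BertForMaskedLM` only ignores label positions that are set to -100, so the padding positions also contribute to the loss. A minimal sketch of how the labels could be masked:
```
import torch

def mask_padding_in_labels(label_ids, pad_token_id):
    """Replace pad token ids in the labels with -100 so the loss ignores padding."""
    labels = torch.tensor(label_ids)
    labels[labels == pad_token_id] = -100
    return labels

# Hypothetical usage inside MaskedDataset.__getitem__ ([PAD] has id 0 for bert-base-cased):
# item['labels'] = mask_padding_in_labels(self.labels[idx], pad_token_id=0)
```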
```
#code for evaluating the fine-tuned bert-base-cased
import json
from transformers import pipeline, BertForMaskedLM
from transformers import BertTokenizer
lm_name = "bert-base-cased"
test_queries = {"Rps26p56 is a subclass of [MASK] .": "pseudogene", "[MASK] is the capital of Hammerfest .": "Hammerfest", "Cubaedomus is a [MASK] .": "taxon", "[MASK] is named after Renfrew .": "Renfrew"}
#bert-base-cased with fine-tuning on train_queries.json and train_labels.json
unmasker_finetuned = pipeline('fill-mask', tokenizer= lm_name, model = BertForMaskedLM.from_pretrained("bert_base_cased_finetuned"), device=0, top_k=5)
#bert-base-cased tokenizer
tokenizer = BertTokenizer.from_pretrained(lm_name)
for query in test_queries:
correct_answer = test_queries[query]
#get the answer of the [MASK]-token of bert-base-cased-finetuned
finetuned_result = unmasker_finetuned(query)
finetuned_all_answers = []
for result in finetuned_result:
finetuned_all_answers.append(result["token_str"])
correct_answer_ids = tokenizer(correct_answer)["input_ids"]
correct_answer_tokens = tokenizer.convert_ids_to_tokens(correct_answer_ids)
correct_answer_tokens.remove("[SEP]")
correct_answer_tokens.remove("[CLS]")
print("query:", query)
print("correct answer:", correct_answer)
print("correct answer tokens:", correct_answer_tokens)
print("-----real behavior----------")
print("finetuned all answers:", finetuned_all_answers)
print("finetuned first answer:", finetuned_result[0]["token_str"])
print("-----expected behavior------")
print("finetuned first answer:", correct_answer, "\n")
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The language model should predict the whole word for the [MASK] token, not just a single sub-word token. In the following, four queries were evaluated with the evaluation script. For the first two queries, the fine-tuned language model predicts the correct sub-word tokens among its top five answers but does not merge them into the full word. For the last two queries, the fine-tuned language model predicts at least the correct first token, but not all of the tokens.
My guess is that something goes wrong during training when the word behind the [MASK] token is not in the vocabulary and the tokenizer splits it into more than one token.
```
query: Rps26p56 is a subclass of [MASK] .
correct answer: pseudogene
correct answer tokens: ['pseudo', '##gene']
-----real behavior----------
finetuned all answers: ['pseudo', 'gene', 'protein', '##gene', 'sub']
finetuned first answer: pseudo
-----expected behavior------
finetuned first answer: pseudogene
query: [MASK] is the capital of Hammerfest .
correct answer: Hammerfest
correct answer tokens: ['Hammer', '##fest']
-----real behavior----------
finetuned all answers: ['Hammer', 'Metal', 'Hell', 'Lock', '##fest']
finetuned first answer: Hammer
-----expected behavior------
finetuned first answer: Hammerfest
query: Cubaedomus is a [MASK] .
correct answer: taxon
correct answer tokens: ['tax', '##on']
-----real behavior----------
finetuned all answers: ['tax', 'genus', 'pseudo', 'synonym', 'is']
finetuned first answer: tax
-----expected behavior------
finetuned first answer: taxon
query: [MASK] is named after Renfrew .
correct answer: Renfrew
correct answer tokens: ['Ren', '##f', '##rew']
-----real behavior----------
finetuned all answers: ['Ren', 'Re', 'R', 'Fe', 'Bo']
finetuned first answer: Ren
-----expected behavior------
finetuned first answer: Renfrew
```
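A minimal workaround sketch (added for illustration; the checkpoint path and the number of `[MASK]` tokens are assumptions based on the scripts above): since a single `[MASK]` can only be filled with one sub-word token, a multi-piece answer such as `pseudogene` has to be predicted with one `[MASK]` per word piece and the pieces joined afterwards.
```
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
# "bert_base_cased_finetuned" is the output directory of the fine-tuning script above.
model = BertForMaskedLM.from_pretrained("bert_base_cased_finetuned")
model.eval()

# Use as many [MASK] tokens as the expected answer has word pieces (2 for "pseudogene").
query = "Rps26p56 is a subclass of [MASK] [MASK] ."
inputs = tokenizer(query, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

mask_positions = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_ids = logits[0, mask_positions].argmax(dim=-1).tolist()
predicted_tokens = tokenizer.convert_ids_to_tokens(predicted_ids)
# Join word pieces, e.g. ["pseudo", "##gene"] -> "pseudogene"
print(tokenizer.convert_tokens_to_string(predicted_tokens))
```
A beam search over the mask positions would give better joint predictions than the independent argmax used here, but this is enough to check whether the fine-tuned model has learned the multi-piece targets.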
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9678/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9678/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9677 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9677/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9677/comments | https://api.github.com/repos/huggingface/transformers/issues/9677/events | https://github.com/huggingface/transformers/pull/9677 | 789,210,908 | MDExOlB1bGxSZXF1ZXN0NTU3NTg3MjIw | 9,677 | Use datasets squad_v2 metric in run_qa | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | COLLABORATOR | null | # What does this PR do?
The `run_qa` example script was using a copied and fixed version of the "squad_v2" metric while waiting for the fix to be merged and released in `datasets`. That is now the case, so this PR removes the band-aid and adjusts the `datasets` version in the requirements.
Fixes #9620
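For reference, a minimal sketch of what the script can now rely on (the ids and answer values below are purely illustrative):
```
from datasets import load_metric

# Load the fixed "squad_v2" metric directly from the datasets library,
# instead of the copied version that used to live next to run_qa.py.
metric = load_metric("squad_v2")

predictions = [{"id": "q1", "prediction_text": "Denver Broncos", "no_answer_probability": 0.0}]
references = [{"id": "q1", "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]
print(metric.compute(predictions=predictions, references=references))
```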
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9677/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9677",
"html_url": "https://github.com/huggingface/transformers/pull/9677",
"diff_url": "https://github.com/huggingface/transformers/pull/9677.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9677.patch",
"merged_at": 1611136333000
} |
https://api.github.com/repos/huggingface/transformers/issues/9676 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9676/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9676/comments | https://api.github.com/repos/huggingface/transformers/issues/9676/events | https://github.com/huggingface/transformers/pull/9676 | 789,082,442 | MDExOlB1bGxSZXF1ZXN0NTU3NDgwMTM2 | 9,676 | Fix GPT conversion script | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | COLLABORATOR | null | # What does this PR do?
One forgotten file in #9674, sorry about that! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9676/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9676",
"html_url": "https://github.com/huggingface/transformers/pull/9676",
"diff_url": "https://github.com/huggingface/transformers/pull/9676.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9676.patch",
"merged_at": 1611068138000
} |
https://api.github.com/repos/huggingface/transformers/issues/9675 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9675/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9675/comments | https://api.github.com/repos/huggingface/transformers/issues/9675/events | https://github.com/huggingface/transformers/pull/9675 | 789,078,351 | MDExOlB1bGxSZXF1ZXN0NTU3NDc2Njg4 | 9,675 | Fix old Seq2SeqTrainer | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | COLLABORATOR | null | # What does this PR do?
Removes, from the old `Seq2SeqTrainer`, the reference to the `_actual_model` method that was recently removed from `Trainer`.
"url": "https://api.github.com/repos/huggingface/transformers/issues/9675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9675/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9675",
"html_url": "https://github.com/huggingface/transformers/pull/9675",
"diff_url": "https://github.com/huggingface/transformers/pull/9675.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9675.patch",
"merged_at": 1611068186000
} |
https://api.github.com/repos/huggingface/transformers/issues/9674 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9674/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9674/comments | https://api.github.com/repos/huggingface/transformers/issues/9674/events | https://github.com/huggingface/transformers/pull/9674 | 789,072,044 | MDExOlB1bGxSZXF1ZXN0NTU3NDcxNDEz | 9,674 | Fix imports in conversion scripts | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | COLLABORATOR | null | # What does this PR do?
During the rework of the new init for fast imports, all absolute imports were switched to relative ones indiscriminately (because absolute imports usually don't work anymore for the core of the lib). However, the conversion scripts are supposed to be executed as standalone scripts, and relative imports can't work there (that's how Python works). This PR fixes those, and it doesn't seem to hurt the `transformers-cli convert` command (which imports things from those modules). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9674/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9674",
"html_url": "https://github.com/huggingface/transformers/pull/9674",
"diff_url": "https://github.com/huggingface/transformers/pull/9674.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9674.patch",
"merged_at": 1611067215000
} |
https://api.github.com/repos/huggingface/transformers/issues/9673 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9673/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9673/comments | https://api.github.com/repos/huggingface/transformers/issues/9673/events | https://github.com/huggingface/transformers/pull/9673 | 789,047,964 | MDExOlB1bGxSZXF1ZXN0NTU3NDUxMTI0 | 9,673 | add mbart to automodel for masked lm | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | MEMBER | null | # What does this PR do?
Fixes #9653
Bart and MBart are the only encoder-decoder models that can do mask-filling, so this PR adds MBart to `AutoModelForMaskedLM` as well.
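With this change, mask-filling with MBart through the auto class should look roughly like the sketch below (assuming the `facebook/mbart-large-cc25` checkpoint):
```
from transformers import AutoModelForMaskedLM, AutoTokenizer

# MBart is now resolved by the masked-LM auto class, just like Bart.
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
model = AutoModelForMaskedLM.from_pretrained("facebook/mbart-large-cc25")

inputs = tokenizer("UN Chief Says There Is No <mask> in Syria", return_tensors="pt")
logits = model(**inputs).logits

mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_index[0]].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```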
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9673/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9673",
"html_url": "https://github.com/huggingface/transformers/pull/9673",
"diff_url": "https://github.com/huggingface/transformers/pull/9673.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9673.patch",
"merged_at": 1611065951000
} |
https://api.github.com/repos/huggingface/transformers/issues/9672 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9672/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9672/comments | https://api.github.com/repos/huggingface/transformers/issues/9672/events | https://github.com/huggingface/transformers/issues/9672 | 788,913,441 | MDU6SXNzdWU3ODg5MTM0NDE= | 9,672 | AttributeError: 'Seq2SeqTrainer' object has no attribute '_actual_model' | {
"login": "caralen",
"id": 26584578,
"node_id": "MDQ6VXNlcjI2NTg0NTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/26584578?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/caralen",
"html_url": "https://github.com/caralen",
"followers_url": "https://api.github.com/users/caralen/followers",
"following_url": "https://api.github.com/users/caralen/following{/other_user}",
"gists_url": "https://api.github.com/users/caralen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/caralen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/caralen/subscriptions",
"organizations_url": "https://api.github.com/users/caralen/orgs",
"repos_url": "https://api.github.com/users/caralen/repos",
"events_url": "https://api.github.com/users/caralen/events{/privacy}",
"received_events_url": "https://api.github.com/users/caralen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @caralen \r\n\r\nthe `Seq2SeqTrainer` is now integrated with the main lib, now it's under `src/trainer_seq2seq.py`, and the seq2seq_trainer in examples is about to be deprecated, this bug is fixed in the new version, so I would recommend you to use the new`Seq2SeqTrainer` from the lib rather than examples folder,\r\n\r\nyou could directly import it from transformers using\r\n```python\r\nfrom transformers import Seq2SeqTrainer\r\n```",
"Hi @patil-suraj, thanks for the quick reply. I will close this issue now."
] | 1,611 | 1,611 | 1,611 | NONE | null | The `_actual_model` method is not defined in the `Seq2SeqTrainer` class, nor in the `Trainer` class from which it is derived:
https://github.com/huggingface/transformers/blob/12c1b5b8f448d652f5e1fa0f069b9569f4540948/examples/seq2seq/seq2seq_trainer.py#L63 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9672/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9671 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9671/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9671/comments | https://api.github.com/repos/huggingface/transformers/issues/9671/events | https://github.com/huggingface/transformers/issues/9671 | 788,896,093 | MDU6SXNzdWU3ODg4OTYwOTM= | 9,671 | How to enable tokenizer padding option in feature extraction pipeline? | {
"login": "bowang-rw-02",
"id": 62551515,
"node_id": "MDQ6VXNlcjYyNTUxNTE1",
"avatar_url": "https://avatars.githubusercontent.com/u/62551515?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bowang-rw-02",
"html_url": "https://github.com/bowang-rw-02",
"followers_url": "https://api.github.com/users/bowang-rw-02/followers",
"following_url": "https://api.github.com/users/bowang-rw-02/following{/other_user}",
"gists_url": "https://api.github.com/users/bowang-rw-02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bowang-rw-02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bowang-rw-02/subscriptions",
"organizations_url": "https://api.github.com/users/bowang-rw-02/orgs",
"repos_url": "https://api.github.com/users/bowang-rw-02/repos",
"events_url": "https://api.github.com/users/bowang-rw-02/events{/privacy}",
"received_events_url": "https://api.github.com/users/bowang-rw-02/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! I think you're looking for `padding=\"longest\"`?",
"Your result if of length 512 because you asked `padding=\"max_length\"`, and the tokenizer max length is 512. If you ask for `\"longest\"`, it will pad up to the longest value in your batch:\r\n\r\n```py\r\n>>> text = \"After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank.\"\r\n... features = nlp([text, text * 2], padding=\"longest\", truncation=True, max_length=40)\r\n```\r\n\r\nreturns features which are of size [42, 768].",
"> Your result if of length 512 because you asked `padding=\"max_length\"`, and the tokenizer max length is 512. If you ask for `\"longest\"`, it will pad up to the longest value in your batch:\r\n> \r\n> ```python\r\n> >>> text = \"After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank.\"\r\n> ... features = nlp([text, text * 2], padding=\"longest\", truncation=True, max_length=40)\r\n> ```\r\n> \r\n> returns features which are of size [42, 768].\r\n\r\nThank you very much! This method works! And I think the 'longest' padding strategy is enough for me to use in my dataset.\r\nBut I just wonder that can I specify a fixed padding size? Like all sentence could be padded to length 40?\r\nBecause in my former 'inconvenient general method', I just use\r\n```\r\ntokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')\r\ntext = 'After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank.'\r\n\r\nencoded_input = tokenizer(text, padding='max_length', truncation=True, max_length=40)\r\n```\r\nand get the fixed size padding sentence though... \r\n(I found this method from the official documentation [https://huggingface.co/transformers/preprocessing.html#everything-you-always-wanted-to-know-about-padding-and-truncation](url)",
"Well it seems impossible for now... I just tried\r\n```\r\ntext = \"After stealing money from the bank vault, the bank robber was seen \" \\\r\n \"fishing on the Mississippi river bank.\"\r\nfeatures = nlp(text, padding='length', truncation=True, length=40)\r\n```\r\nAnd the error message showed that:\r\n**ValueError: 'length' is not a valid PaddingStrategy, please select one of ['longest', 'max_length', 'do_not_pad']**\r\nAnyway, thank you very much!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,611 | 1,619 | 1,619 | NONE | null | I am trying to use the pipeline() to extract features of sentence tokens.
Because my sentences do not all have the same length, and I am then going to feed the token features to RNN-based models, I want to pad the sentences to a fixed length so that the features all have the same size.
Before I knew about the convenient pipeline() method, I was using a general approach to get the features, which works fine but is inconvenient, like this:
```
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
text = 'After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank.'
encoded_input = tokenizer(text, padding='max_length', truncation=True, max_length=40)
indexed_tokens = encoded_input['input_ids']
segments_ids = encoded_input['token_type_ids']
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
model = AutoModel.from_pretrained('bert-base-uncased', output_hidden_states=True)
model.eval()
with torch.no_grad():
outputs = model(tokens_tensor, segments_tensors)
hidden_states = outputs[2]
```
Then I also need to merge (or select) the features from the returned **hidden_states** myself... and finally get a [40, 768] padded feature for this sentence's tokens, as I want. However, as you can see, this is very inconvenient.
Compared to that, the pipeline method works very well and easily, and only needs the following five lines of code.
```
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('bert-base-uncased', output_hidden_states=True)
nlp = pipeline('feature-extraction', model=model, tokenizer=tokenizer)
text = "After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank."
features = nlp(text)
```
Then I can directly get the token features of the original-length sentence, which is [22, 768].
**However, how can I enable the padding option of the tokenizer in the pipeline?**
From #9432 and #9576, I learned that we can now pass truncation options to the pipeline object (here called **nlp**), so I imitated that and wrote this code:
```
text = "After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank."
features = nlp(text, padding='max_length', truncation=True, max_length=40)
```
The program did not throw an error, but it just returned a [512, 768] vector...?
So is there any method to correctly enable the padding options? Thank you!
"url": "https://api.github.com/repos/huggingface/transformers/issues/9671/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9671/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9670 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9670/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9670/comments | https://api.github.com/repos/huggingface/transformers/issues/9670/events | https://github.com/huggingface/transformers/issues/9670 | 788,843,889 | MDU6SXNzdWU3ODg4NDM4ODk= | 9,670 | bert_tokenizer.decode(bert_tokenizer.encode(sentence))!=sentence | {
"login": "youngornever",
"id": 34613489,
"node_id": "MDQ6VXNlcjM0NjEzNDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/34613489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/youngornever",
"html_url": "https://github.com/youngornever",
"followers_url": "https://api.github.com/users/youngornever/followers",
"following_url": "https://api.github.com/users/youngornever/following{/other_user}",
"gists_url": "https://api.github.com/users/youngornever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/youngornever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/youngornever/subscriptions",
"organizations_url": "https://api.github.com/users/youngornever/orgs",
"repos_url": "https://api.github.com/users/youngornever/repos",
"events_url": "https://api.github.com/users/youngornever/events{/privacy}",
"received_events_url": "https://api.github.com/users/youngornever/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! This is a normal behavior of the BERT tokenizer. You can add the tokens you do not wish to see split to the vocabulary:\r\n```py\r\n>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')\r\n... tokenizer.add_tokens([\"no_passages_used\"]) # <--------------------------------- Here\r\n... paragraph = \"no_passages_used knowledge no_passages_used\"\r\n... print(tokenizer.encode(paragraph))\r\n... print(tokenizer.decode(tokenizer.encode(paragraph)))\r\n[101, 30522, 3716, 30522, 102]\r\n[CLS] no_passages_used knowledge no_passages_used [SEP]\r\n```",
"Don't forget to resize the embedding matrix of your model if you add new tokens to the vocabulary: [docs for add_tokens method](https://huggingface.co/transformers/internal/tokenization_utils.html?highlight=resize_token_embeddings#transformers.tokenization_utils_base.SpecialTokensMixin.add_tokens)"
] | 1,611 | 1,611 | 1,611 | NONE | null | ```
from transformers import AutoTokenizer # transformers==4.2.1
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
paragraph = "no_passages_used __knowledge__ no_passages_used"
print(tokenizer.encode(paragraph))
print(tokenizer.decode(tokenizer.encode(paragraph)))
"""
>[101, 2053, 1035, 13768, 1035, 2109, 1035, 1035, 3716, 1035, 1035, 2053, 1035, 13768, 1035, 2109, 102]
>[CLS] no _ passages _ used _ _ knowledge _ _ no _ passages _ used [SEP]
"""
```
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9670/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9669 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9669/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9669/comments | https://api.github.com/repos/huggingface/transformers/issues/9669/events | https://github.com/huggingface/transformers/pull/9669 | 788,794,997 | MDExOlB1bGxSZXF1ZXN0NTU3MjQwNDgw | 9,669 | [Bart-like tests] Fix torch device for bart tests | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | MEMBER | null | # What does this PR do?
Fixes the failing CircleCI GPU run caused by this commit: https://github.com/huggingface/transformers/commit/357fb1c5d8b6a16f042f9b504f023d935086e8e5
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9669/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9669/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9669",
"html_url": "https://github.com/huggingface/transformers/pull/9669",
"diff_url": "https://github.com/huggingface/transformers/pull/9669.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9669.patch",
"merged_at": 1611043585000
} |
https://api.github.com/repos/huggingface/transformers/issues/9668 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9668/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9668/comments | https://api.github.com/repos/huggingface/transformers/issues/9668/events | https://github.com/huggingface/transformers/issues/9668 | 788,675,954 | MDU6SXNzdWU3ODg2NzU5NTQ= | 9,668 | Cannot compile tokenizers on PowerPC 9 while installing transformers | {
"login": "kmeng01",
"id": 13970922,
"node_id": "MDQ6VXNlcjEzOTcwOTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/13970922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kmeng01",
"html_url": "https://github.com/kmeng01",
"followers_url": "https://api.github.com/users/kmeng01/followers",
"following_url": "https://api.github.com/users/kmeng01/following{/other_user}",
"gists_url": "https://api.github.com/users/kmeng01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kmeng01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kmeng01/subscriptions",
"organizations_url": "https://api.github.com/users/kmeng01/orgs",
"repos_url": "https://api.github.com/users/kmeng01/repos",
"events_url": "https://api.github.com/users/kmeng01/events{/privacy}",
"received_events_url": "https://api.github.com/users/kmeng01/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Could you open an issue on the [tokenizers](https://github.com/huggingface/tokenizers) repository instead? @n1t0 will probably know what's up!",
"Done! https://github.com/huggingface/tokenizers/issues/604",
"This is actually a transformers problem I think. It's the old versions of tokenizers imported using a path that has since become private. It's fixed in the newer versions, but transformers is still pinned to the old version: https://github.com/huggingface/transformers/issues/9649",
"Ah, installing something newer, e.g. `transformers==4.2.2`, has fixed it. Thanks so much!"
] | 1,611 | 1,611 | 1,611 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: PowerPC 9
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5 w/ GPU
- Tensorflow version (GPU?): n/a
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@mfuntowicz
## Information
I am trying to install `transformers==3.4.0` on a PowerPC 9 system. It's an IBM compute rig.
## To reproduce
Steps to reproduce the behavior:
1. Create new `conda` environment with python 3.7
2. Run `pip install transformers==3.4.0` (the version that I need)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
Compiling tokenizers v0.10.1 (/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/tokenizers-lib)
Running `rustc --crate-name tokenizers --edition=2018 tokenizers-lib/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=204c4d103d08e9e3 -C extra-filename=-204c4d103d08e9e3 --out-dir /tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps -L dependency=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps --extern clap=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libclap-b8e428690762cf7e.rmeta --extern derive_builder=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libderive_builder-247f4f57ff4bf4c7.so --extern esaxx_rs=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libesaxx_rs-28ce6f8a8d31c937.rmeta --extern indicatif=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libindicatif-280a1d33f346e384.rmeta --extern itertools=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libitertools-759131012594af62.rmeta --extern lazy_static=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/liblazy_static-0f749853bc34e9e0.rmeta --extern log=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/liblog-12a018fba7f0b36d.rmeta --extern onig=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libonig-3ca2736cdef653d2.rmeta --extern rand=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/librand-52622a6339ec540d.rmeta --extern rayon=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/librayon-f4508233e0c77565.rmeta --extern rayon_cond=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/librayon_cond-d89d0c7f0a1d1a11.rmeta --extern regex=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libregex-dbb55ca763c16a0e.rmeta --extern regex_syntax=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libregex_syntax-c7a8a1f28fe982ac.rmeta --extern serde=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libserde-11e7f5f85ab52b72.rmeta --extern serde_json=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libserde_json-477c52136da5fafe.rmeta --extern spm_precompiled=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libspm_precompiled-39a90f21c16965ef.rmeta --extern unicode_normalization_alignments=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libunicode_normalization_alignments-157a660dec7f1476.rmeta --extern unicode_segmentation=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libunicode_segmentation-66856f91381ae1a4.rmeta --extern unicode_categories=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libunicode_categories-209e6f430e5d88d1.rmeta -L native=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/build/esaxx-rs-62ba703c44f19ac6/out -L native=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/build/onig_sys-091ecfe4b66243c7/out`
error[E0603]: module `export` is private
--> tokenizers-lib/src/tokenizer/mod.rs:24:12
|
24 | use serde::export::Formatter;
| ^^^^^^ private module
|
note: the module `export` is defined here
--> /home/mengk/.cargo/registry/src/github.com-1ecc6299db9ec823/serde-1.0.119/src/lib.rs:275:5
|
275 | use self::__private as export;
| ^^^^^^^^^^^^^^^^^^^^^^^^^
error: aborting due to previous error
For more information about this error, try `rustc --explain E0603`.
error: could not compile `tokenizers`.
Caused by:
process didn't exit successfully: `rustc --crate-name tokenizers --edition=2018 tokenizers-lib/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=204c4d103d08e9e3 -C extra-filename=-204c4d103d08e9e3 --out-dir /tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps -L dependency=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps --extern clap=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libclap-b8e428690762cf7e.rmeta --extern derive_builder=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libderive_builder-247f4f57ff4bf4c7.so --extern esaxx_rs=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libesaxx_rs-28ce6f8a8d31c937.rmeta --extern indicatif=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libindicatif-280a1d33f346e384.rmeta --extern itertools=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libitertools-759131012594af62.rmeta --extern lazy_static=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/liblazy_static-0f749853bc34e9e0.rmeta --extern log=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/liblog-12a018fba7f0b36d.rmeta --extern onig=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libonig-3ca2736cdef653d2.rmeta --extern rand=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/librand-52622a6339ec540d.rmeta --extern rayon=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/librayon-f4508233e0c77565.rmeta --extern rayon_cond=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/librayon_cond-d89d0c7f0a1d1a11.rmeta --extern regex=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libregex-dbb55ca763c16a0e.rmeta --extern regex_syntax=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libregex_syntax-c7a8a1f28fe982ac.rmeta --extern serde=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libserde-11e7f5f85ab52b72.rmeta --extern serde_json=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libserde_json-477c52136da5fafe.rmeta --extern spm_precompiled=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libspm_precompiled-39a90f21c16965ef.rmeta --extern unicode_normalization_alignments=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libunicode_normalization_alignments-157a660dec7f1476.rmeta --extern unicode_segmentation=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libunicode_segmentation-66856f91381ae1a4.rmeta --extern unicode_categories=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libunicode_categories-209e6f430e5d88d1.rmeta -L native=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/build/esaxx-rs-62ba703c44f19ac6/out -L 
native=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/build/onig_sys-091ecfe4b66243c7/out` (exit code: 1)
cargo rustc --lib --manifest-path Cargo.toml --features pyo3/extension-module --release --verbose -- --crate-type cdylib
error: cargo failed with code: 101
----------------------------------------
ERROR: Failed building wheel for tokenizers
Failed to build tokenizers
ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly
```
## Expected behavior
There shouldn't be any error messages.
Sidenote: before getting this, I had an error complaining that I didn't have Rust installed, which I fixed by installing it with the command given on the official website.
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9668/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9667 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9667/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9667/comments | https://api.github.com/repos/huggingface/transformers/issues/9667/events | https://github.com/huggingface/transformers/pull/9667 | 788,576,398 | MDExOlB1bGxSZXF1ZXN0NTU3MDYyMjM0 | 9,667 | Add new model docs | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,612 | 1,612 | MEMBER | null | # What does this PR do?
This PR adds more information on how to add a model to Transformers docs.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
## UPDATE
The `model_doc/add_new_model.rst` is now finished for a first merge, IMO. It would be amazing if @LysandreJik @sgugger could review the file real quick again - I tried to add all of your suggestions. Also, I added a diagram showing the model design of Transformers, which was not reviewed yet. Note that I did not add a clear design for tokenizers, since that takes a lot of time and I want to improve this step-by-step explanation iteratively. The first model for which I'd like to mentor someone from the community would be BigBird, which does not need a new tokenizer.
In addition, I would be extremely grateful if @stas00 @abhishekkrthakur @patil-suraj @stefan-it @NielsRogge could take 10 minutes to review the `model_doc/add_model.rst` file for possible improvements, since you just recently added a new model. Your feedback would be especially useful since you might have a much more "unbiased" view of what is difficult/easy when adding a model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9667/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9667",
"html_url": "https://github.com/huggingface/transformers/pull/9667",
"diff_url": "https://github.com/huggingface/transformers/pull/9667.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9667.patch",
"merged_at": 1612191311000
} |
https://api.github.com/repos/huggingface/transformers/issues/9666 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9666/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9666/comments | https://api.github.com/repos/huggingface/transformers/issues/9666/events | https://github.com/huggingface/transformers/issues/9666 | 788,564,570 | MDU6SXNzdWU3ODg1NjQ1NzA= | 9,666 | Fine-tuning LM with NSP | {
"login": "ahmedkotb98",
"id": 42472093,
"node_id": "MDQ6VXNlcjQyNDcyMDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/42472093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahmedkotb98",
"html_url": "https://github.com/ahmedkotb98",
"followers_url": "https://api.github.com/users/ahmedkotb98/followers",
"following_url": "https://api.github.com/users/ahmedkotb98/following{/other_user}",
"gists_url": "https://api.github.com/users/ahmedkotb98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahmedkotb98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahmedkotb98/subscriptions",
"organizations_url": "https://api.github.com/users/ahmedkotb98/orgs",
"repos_url": "https://api.github.com/users/ahmedkotb98/repos",
"events_url": "https://api.github.com/users/ahmedkotb98/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahmedkotb98/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @ahmedkotb98 \r\n\r\nWe would love to help, but it would be better if you could only post the relevant minimal code snippet to reproduce the issue, rather than a bunch of scripts.\r\n\r\nAlso, it's better to ask such type of questions on the [forum](https://discuss.huggingface.co/) first. Here's our guide on [how to request support](https://discuss.huggingface.co/t/how-to-request-support/3128).\r\n\r\nThanks.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,611 | 1,619 | 1,619 | NONE | null | Environment info
transformers-4.2.1
PyTorch
tokenizers-0.9.4
sentencepiece-0.1.95
When fine-tuning BERT, my script runs but does not complete, failing with this error:

```
Traceback (most recent call last):
File "/content/run.py", line 757, in <module>
main()
File "/content/run.py", line 656, in main
labels=lm_label_ids, next_sentence_label=is_next)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_bert.py", line 1065, in forward
return_dict=return_dict,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_bert.py", line 968, in forward
return_dict=return_dict,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_bert.py", line 566, in forward
output_attentions,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_bert.py", line 460, in forward
past_key_value=self_attn_past_key_value,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_bert.py", line 393, in forward
output_attentions,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_bert.py", line 314, in forward
attention_probs = nn.Softmax(dim=-1)(attention_scores)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/activation.py", line 1198, in forward
return F.softmax(input, self.dim, _stacklevel=5)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1512, in softmax
ret = input.softmax(dim)
RuntimeError: CUDA error: device-side assert triggered
```

And my code is here:

```python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import logging
import argparse
#from tqdm import tqdm
#from tqdm import trange
from tqdm import notebook , trange
import numpy as np
import torch
from torch.utils.data import DataLoader, RandomSampler , SequentialSampler
from torch.utils.data.distributed import DistributedSampler
#from pytorch_pretrained_bert.tokenization import BertTokenizer
#from pytorch_pretrained_bert.modeling import BertForPreTraining
from transformers import BertTokenizer, BertForPreTraining
#from pytorch_pretrained_bert.optimization import BertAdam
from transformers import XLNetTokenizer
from transformers import AdamW, get_linear_schedule_with_warmup
#from transformers import BertForPreTraining
import sentencepiece as spm
from torch.utils.data import Dataset
import random
logging.basicConfig(format='%(asctime)s - %(levelname)s - %(name)s - %(message)s',
datefmt='%m/%d/%Y %H:%M:%S',
level=logging.INFO)
logger = logging.getLogger(__name__)
def warmup_linear(x, warmup=0.002):
if x < warmup:
return x / warmup
return 1.0 - x
def accuracy(out, labels, total_test):
class_preds = out.data.cpu().numpy().argmax(axis=-1)
labels = labels.data.cpu().numpy()
return np.sum(class_preds == labels) / total_test
class BERTDataset(Dataset):
def __init__(self, corpus_path, tokenizer, seq_len, encoding="utf-8", corpus_lines=None, on_memory=True):
self.vocab = tokenizer.get_vocab()
self.tokenizer = tokenizer
self.seq_len = seq_len
self.on_memory = on_memory
self.corpus_lines = corpus_lines # number of non-empty lines in input corpus
self.corpus_path = corpus_path
self.encoding = encoding
self.current_doc = 0 # to avoid random sentence from same doc
# for loading samples directly from file
self.sample_counter = 0 # used to keep track of full epochs on file
self.line_buffer = None # keep second sentence of a pair in memory and use as first sentence in next pair
# for loading samples in memory
self.current_random_doc = 0
self.num_docs = 0
self.sample_to_doc = [] # map sample index to doc and line
# load samples into memory
if on_memory:
self.all_docs = []
doc = []
self.corpus_lines = 0
with open(corpus_path, "r", encoding=encoding) as f:
for line in notebook.tqdm(f, desc="Loading Dataset", total=corpus_lines):
line = line.strip()
if line == "":
self.all_docs.append(doc)
doc = []
# remove last added sample because there won't be a subsequent line anymore in the doc
self.sample_to_doc.pop()
else:
# store as one sample
sample = {"doc_id": len(self.all_docs),
"line": len(doc)}
self.sample_to_doc.append(sample)
doc.append(line)
self.corpus_lines = self.corpus_lines + 1
# if last row in file is not empty
if self.all_docs[-1] != doc:
self.all_docs.append(doc)
self.sample_to_doc.pop()
self.num_docs = len(self.all_docs)
# load samples later lazily from disk
else:
if self.corpus_lines is None:
with open(corpus_path, "r", encoding=encoding) as f:
self.corpus_lines = 0
for line in notebook.tqdm(f, desc="Loading Dataset", total=corpus_lines):
if line.strip() == "":
self.num_docs += 1
else:
self.corpus_lines += 1
# if doc does not end with empty line
if line.strip() != "":
self.num_docs += 1
self.file = open(corpus_path, "r", encoding=encoding)
self.random_file = open(corpus_path, "r", encoding=encoding)
def __len__(self):
# last line of doc won't be used, because there's no "nextSentence". Additionally, we start counting at 0.
return self.corpus_lines - self.num_docs - 1
def __getitem__(self, item):
cur_id = self.sample_counter
self.sample_counter += 1
if not self.on_memory:
# after one epoch we start again from beginning of file
if cur_id != 0 and (cur_id % len(self) == 0):
self.file.close()
self.file = open(self.corpus_path, "r", encoding=self.encoding)
t1, t2, is_next_label = self.random_sent(item)
# tokenize
tokens_a = self.tokenizer.tokenize(t1)
tokens_b = self.tokenizer.tokenize(t2)
# combine to one sample
cur_example = InputExample(guid=cur_id, tokens_a=tokens_a, tokens_b=tokens_b, is_next=is_next_label)
# transform sample to features
cur_features = convert_example_to_features(cur_example, self.seq_len, self.tokenizer)
cur_tensors = (torch.tensor(cur_features.input_ids),
torch.tensor(cur_features.input_mask),
torch.tensor(cur_features.segment_ids),
torch.tensor(cur_features.lm_label_ids),
torch.tensor(cur_features.is_next))
return cur_tensors
def random_sent(self, index):
"""
Get one sample from corpus consisting of two sentences. With prob. 50% these are two subsequent sentences
from one doc. With 50% the second sentence will be a random one from another doc.
:param index: int, index of sample.
:return: (str, str, int), sentence 1, sentence 2, isNextSentence Label
"""
t1, t2 = self.get_corpus_line(index)
if random.random() > 0.5:
label = 0
else:
t2 = self.get_random_line()
label = 1
assert len(t1) > 0
assert len(t2) > 0
return t1, t2, label
def get_corpus_line(self, item):
"""
Get one sample from corpus consisting of a pair of two subsequent lines from the same doc.
:param item: int, index of sample.
:return: (str, str), two subsequent sentences from corpus
"""
t1 = ""
t2 = ""
assert item < self.corpus_lines
if self.on_memory:
sample = self.sample_to_doc[item]
t1 = self.all_docs[sample["doc_id"]][sample["line"]]
t2 = self.all_docs[sample["doc_id"]][sample["line"] + 1]
# used later to avoid random nextSentence from same doc
self.current_doc = sample["doc_id"]
return t1, t2
else:
if self.line_buffer is None:
# read first non-empty line of file
while t1 == "":
t1 = self.file.__next__().strip()
t2 = self.file.__next__().strip()
else:
# use t2 from previous iteration as new t1
t1 = self.line_buffer
t2 = self.file.__next__().strip()
# skip empty rows that are used for separating documents and keep track of current doc id
while t2 == "" or t1 == "":
t1 = self.file.__next__().strip()
t2 = self.file.__next__().strip()
self.current_doc = self.current_doc + 1
self.line_buffer = t2
assert t1 != ""
assert t2 != ""
return t1, t2
def get_random_line(self):
"""
Get random line from another document for nextSentence task.
:return: str, content of one line
"""
# Similar to original tf repo: This outer loop should rarely go for more than one iteration for large
# corpora. However, just to be careful, we try to make sure that
# the random document is not the same as the document we're processing.
for _ in range(10):
if self.on_memory:
rand_doc_idx = random.randint(0, len(self.all_docs) - 1)
rand_doc = self.all_docs[rand_doc_idx]
line = rand_doc[random.randrange(len(rand_doc))]
else:
rand_index = random.randint(1, self.corpus_lines if self.corpus_lines < 1000 else 1000)
# pick random line
for _ in range(rand_index):
line = self.get_next_line()
# check if our picked random line is really from another doc like we want it to be
if self.current_random_doc != self.current_doc:
break
return line
def get_next_line(self):
""" Gets next line of random_file and starts over when reaching end of file"""
try:
line = self.random_file.__next__().strip()
# keep track of which document we are currently looking at to later avoid having the same doc as t1
if line == "":
self.current_random_doc = self.current_random_doc + 1
line = self.random_file.__next__().strip()
except StopIteration:
self.random_file.close()
self.random_file = open(self.corpus_path, "r", encoding=self.encoding)
line = self.random_file.__next__().strip()
return line
class InputExample(object):
"""A single training/test example for the language model."""
def __init__(self, guid, tokens_a, tokens_b=None, is_next=None, lm_labels=None):
"""Constructs a InputExample.
Args:
guid: Unique id for the example.
tokens_a: string. The untokenized text of the first sequence. For single
sequence tasks, only this sequence must be specified.
tokens_b: (Optional) string. The untokenized text of the second sequence.
Only must be specified for sequence pair tasks.
label: (Optional) string. The label of the example. This should be
specified for train and dev examples, but not for test examples.
"""
self.guid = guid
self.tokens_a = tokens_a
self.tokens_b = tokens_b
self.is_next = is_next # nextSentence
self.lm_labels = lm_labels # masked words for language model
class InputFeatures(object):
"""A single set of features of data."""
def __init__(self, input_ids, input_mask, segment_ids, is_next, lm_label_ids):
self.input_ids = input_ids
self.input_mask = input_mask
self.segment_ids = segment_ids
self.is_next = is_next
self.lm_label_ids = lm_label_ids
def random_word(tokens, tokenizer):
"""
Masking some random tokens for Language Model task with probabilities as in the original BERT paper.
:param tokens: list of str, tokenized sentence.
:param tokenizer: Tokenizer, object used for tokenization (we need it's vocab here)
:return: (list of str, list of int), masked tokens and related labels for LM prediction
"""
output_label = []
for i, token in enumerate(tokens):
prob = random.random()
# mask token with 15% probability
if prob < 0.15:
prob /= 0.15
# 80% randomly change token to mask token
if prob < 0.8:
tokens[i] = "[MASK]"
# 10% randomly change token to random token
elif prob < 0.9:
tokens[i] = random.choice(list(tokenizer.get_vocab()))
# -> rest 10% randomly keep current token
# append current token to output (we will predict these later)
try:
output_label.append(tokenizer.convert_tokens_to_ids(token))
except KeyError:
# For unknown words (should not occur with BPE vocab)
output_label.append(tokenizer.convert_tokens_to_ids("[UNK]"))
logger.warning("Cannot find token '{}' in vocab. Using [UNK] insetad".format(token))
else:
# no masking token (will be ignored by loss function later)
output_label.append(-100)
return tokens, output_label
def convert_example_to_features(example, max_seq_length, tokenizer):
"""
Convert a raw sample (pair of sentences as tokenized strings) into a proper training sample with
IDs, LM labels, input_mask, CLS and SEP tokens etc.
:param example: InputExample, containing sentence input as strings and is_next label
:param max_seq_length: int, maximum length of sequence.
:param tokenizer: Tokenizer
:return: InputFeatures, containing all inputs and labels of one sample as IDs (as used for model training)
"""
tokens_a = example.tokens_a
tokens_b = example.tokens_b
# Modifies `tokens_a` and `tokens_b` in place so that the total
# length is less than the specified length.
# Account for [CLS], [SEP], [SEP] with "- 3"
_truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3)
t1_random, t1_label = random_word(tokens_a, tokenizer)
t2_random, t2_label = random_word(tokens_b, tokenizer)
# concatenate lm labels and account for CLS, SEP, SEP
cls_id = tokenizer.convert_tokens_to_ids(["[CLS]"])[0]
sep_id = tokenizer.convert_tokens_to_ids(["[SEP]"])[0]
pad_id = tokenizer.convert_tokens_to_ids(["[PAD]"])[0]
lm_label_ids = ([cls_id] + t1_label + [sep_id] + t2_label + [sep_id])
# The convention in BERT is:
# (a) For sequence pairs:
# tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]
# type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1
# (b) For single sequences:
# tokens: [CLS] the dog is hairy . [SEP]
# type_ids: 0 0 0 0 0 0 0
#
# Where "type_ids" are used to indicate whether this is the first
# sequence or the second sequence. The embedding vectors for `type=0` and
# `type=1` were learned during pre-training and are added to the wordpiece
# embedding vector (and position vector). This is not *strictly* necessary
# since the [SEP] token unambiguously separates the sequences, but it makes
# it easier for the model to learn the concept of sequences.
#
# For classification tasks, the first vector (corresponding to [CLS]) is
# used as the "sentence vector". Note that this only makes sense because
# the entire model is fine-tuned.
tokens = []
segment_ids = []
tokens.append("[CLS]")
segment_ids.append(0)
for token in tokens_a:
tokens.append(token)
segment_ids.append(0)
tokens.append("[SEP]")
segment_ids.append(0)
assert len(tokens_b) > 0
for token in tokens_b:
tokens.append(token)
segment_ids.append(1)
tokens.append("[SEP]")
segment_ids.append(1)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
# The mask has 1 for real tokens and 0 for padding tokens. Only real
# tokens are attended to.
input_mask = [1] * len(input_ids)
# Zero-pad up to the sequence length.
while len(input_ids) < max_seq_length:
input_ids.append(0)
input_mask.append(0)
segment_ids.append(0)
lm_label_ids.append(-100)
assert len(input_ids) == max_seq_length
assert len(input_mask) == max_seq_length
assert len(segment_ids) == max_seq_length
assert len(lm_label_ids) == max_seq_length
if example.guid < 5:
logger.info("*** Example ***")
logger.info("guid: %s" % (example.guid))
logger.info("tokens: %s" % " ".join(
[str(x) for x in tokens]))
logger.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
logger.info("input_mask: %s" % " ".join([str(x) for x in input_mask]))
logger.info(
"segment_ids: %s" % " ".join([str(x) for x in segment_ids]))
logger.info("LM label: %s " % (lm_label_ids))
logger.info("Is next sentence label: %s " % (example.is_next))
features = InputFeatures(input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids,
lm_label_ids=lm_label_ids,
is_next=example.is_next)
return features
def main():
parser = argparse.ArgumentParser()
## Required parameters
parser.add_argument("--train_file",
default=None,
type=str,
required=True,
help="The input train corpus.")
parser.add_argument("--test_file",
default=None,
type=str,
required=True,
help="The input test corpus.")
parser.add_argument("--tokenizer_model", default=None, type=str, required=True,
help="tokenizer pre-trained model selected in the list: bert-base-uncased, "
"bert-large-uncased, bert-base-cased, bert-base-multilingual, bert-base-chinese.")
parser.add_argument("--bert_model", default=None, type=str, required=True,
help="Bert pre-trained model selected in the list: bert-base-uncased, "
"bert-large-uncased, bert-base-cased, bert-base-multilingual, bert-base-chinese.")
parser.add_argument("--config_file", default=None, type=str, required=True,
help="Bert pre-trained model selected in the list: bert-base-uncased, "
"bert-large-uncased, bert-base-cased, bert-base-multilingual, bert-base-chinese.")
parser.add_argument("--output_dir",
default=None,
type=str,
required=True,
help="The output directory where the model checkpoints will be written.")
## Other parameters
parser.add_argument("--max_seq_length",
default=128,
type=int,
help="The maximum total input sequence length after WordPiece tokenization. \n"
"Sequences longer than this will be truncated, and sequences shorter \n"
"than this will be padded.")
parser.add_argument("--train_batch_size",
default=32,
type=int,
help="Total batch size for training.")
parser.add_argument("--eval_batch_size",
default=32,
type=int,
help="Total batch size for eval.")
parser.add_argument("--learning_rate",
default=5e-5,
type=float,
help="The initial learning rate for Adam.")
parser.add_argument("--num_train_epochs",
default=4,
type=float,
help="Total number of training epochs to perform.")
parser.add_argument("--adam_epsilon",
default=1e-8,
type=float,
help="Proportion of training to perform linear learning rate warmup for. "
"E.g., 0.1 = 10%% of training.")
parser.add_argument("--no_cuda",
action='store_true',
help="Whether not to use CUDA when available")
parser.add_argument("--on_memory",
action='store_true',
help="Whether to load train samples into memory or use disk")
parser.add_argument("--do_lower_case",
action='store_true',
help="Whether to lower case the input text. True for uncased models, False for cased models.")
parser.add_argument("--local_rank",
type=int,
default=-1,
help="local_rank for distributed training on gpus")
parser.add_argument('--seed',
type=int,
default=42,
help="random seed for initialization")
parser.add_argument('--gradient_accumulation_steps',
type=int,
default=1,
help="Number of updates steps to accumualte before performing a backward/update pass.")
parser.add_argument('--fp16',
action='store_true',
help="Whether to use 16-bit float precision instead of 32-bit")
parser.add_argument('--loss_scale',
type=float, default=0,
help="Loss scaling to improve fp16 numeric stability. Only used when fp16 set to True.\n"
"0 (default value): dynamic loss scaling.\n"
"Positive power of 2: static loss scaling value.\n")
args = parser.parse_args()
if args.local_rank == -1 or args.no_cuda:
device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu")
n_gpu = torch.cuda.device_count()
else:
torch.cuda.set_device(args.local_rank)
device = torch.device("cuda", args.local_rank)
n_gpu = 1
# Initializes the distributed backend which will take care of synchronizing nodes/GPUs
torch.distributed.init_process_group(backend='nccl')
logger.info("device: {} n_gpu: {}, distributed training: {}, 16-bits training: {}".format(
device, n_gpu, bool(args.local_rank != -1), args.fp16))
if args.gradient_accumulation_steps < 1:
raise ValueError("Invalid gradient_accumulation_steps parameter: {}, should be >= 1".format(
args.gradient_accumulation_steps))
args.train_batch_size = int(args.train_batch_size / args.gradient_accumulation_steps)
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
if n_gpu > 0:
torch.cuda.manual_seed_all(args.seed)
#if not args.do_train and not args.do_eval:
# raise ValueError("At least one of `do_train` or `do_eval` must be True.")
if os.path.exists(args.output_dir) and os.listdir(args.output_dir):
raise ValueError("Output directory ({}) already exists and is not empty.".format(args.output_dir))
os.makedirs(args.output_dir, exist_ok=True)
# tokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=args.do_lower_case)
tokenizer = XLNetTokenizer.from_pretrained(args.tokenizer_model)
# train_examples = None
num_train_steps = None
print("Loading Train Dataset", args.train_file)
train_dataset = BERTDataset(args.train_file, tokenizer, seq_len=args.max_seq_length,
corpus_lines=None, on_memory=args.on_memory)
print("Loading eval Dataset", args.test_file)
eval_dataset = BERTDataset(args.test_file, tokenizer, seq_len=args.max_seq_length,
corpus_lines=None, on_memory=args.on_memory)
num_train_steps = int(
len(train_dataset) / args.train_batch_size / args.gradient_accumulation_steps * args.num_train_epochs)
# Prepare model
model = BertForPreTraining.from_pretrained(
args.bert_model,
config=args.config_file,
output_attentions=False, # Whether the model returns attentions weights.
output_hidden_states=False, # Whether the model returns all hidden-states.
)
# Tell pytorch to run this model on the GPU.
model.to(device)
if args.fp16:
model.half()
if args.local_rank != -1:
try:
from apex.parallel import DistributedDataParallel as DDP
except ImportError:
raise ImportError(
"Please install apex from https://www.github.com/nvidia/apex to use distributed and fp16 training.")
model = DDP(model)
elif n_gpu > 1:
model = torch.nn.DataParallel(model)
# Prepare optimizer
'''
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
if args.fp16:
try:
from apex.optimizers import FP16_Optimizer
from apex.optimizers import FusedAdam
except ImportError:
raise ImportError(
"Please install apex from https://www.github.com/nvidia/apex to use distributed and fp16 training.")
optimizer = FusedAdam(optimizer_grouped_parameters,
lr=args.learning_rate,
bias_correction=False,
max_grad_norm=1.0)
if args.loss_scale == 0:
optimizer = FP16_Optimizer(optimizer, dynamic_loss_scale=True)
else:
optimizer = FP16_Optimizer(optimizer, static_loss_scale=args.loss_scale)
else:
optimizer = AdamW(optimizer_grouped_parameters,
lr=args.learning_rate,
warmup=args.warmup_proportion,
t_total=num_train_steps)
'''
logger.info("***** Running training *****")
logger.info(" Num examples = %d", len(train_dataset))
logger.info(" Batch size = %d", args.train_batch_size)
logger.info(" Num steps = %d", num_train_steps)
if args.local_rank == -1:
train_sampler = SequentialSampler(train_dataset)
eval_sampler = SequentialSampler(eval_dataset)
else:
# TODO: check if this works with current data generator from disk that relies on file.__next__
# (it doesn't return item back by index)
train_sampler = DistributedSampler(train_dataset)
eval_sampler = DistributedSampler(eval_dataset)
train_dataloader = DataLoader(train_dataset, sampler=train_sampler, batch_size=args.train_batch_size)
eval_dataloader = DataLoader(eval_dataset, sampler=eval_sampler, batch_size=args.train_batch_size)
# optimizer
t_total = len(train_dataloader) // args.train_batch_size
no_decay = ['bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in model.named_parameters() if
not any(nd in n for nd in no_decay)],
'weight_decay': 0.01},
{'params': [p for n, p in model.named_parameters() if any(
nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate, eps=args.adam_epsilon)
scheduler = get_linear_schedule_with_warmup(
optimizer, 0, t_total)
model.train()
tr_loss = 0
global_step = 0
acc = 0
train_loss = 0.0
nb_tr_examples, nb_tr_steps = 0, 0
for _ in trange(int(args.num_train_epochs), desc="Epoch"):
for batch in notebook.tqdm(train_dataloader, desc="Train Iteration"):
batch = tuple(t.to(device) for t in batch)
input_ids, input_mask, segment_ids, lm_label_ids, is_next = batch
outputs = model(input_ids=input_ids, attention_mask=input_mask, token_type_ids=segment_ids,
labels=lm_label_ids, next_sentence_label=is_next)
loss = outputs.loss
'''
if n_gpu > 1:
loss = loss.mean() # mean() to average on multi-gpu.
if args.gradient_accumulation_steps > 1:
loss = loss / args.gradient_accumulation_steps
if args.fp16:
optimizer.backward(outputs.loss)
else:
loss.backward()
'''
print(loss)
loss.backward()
tr_loss += loss.item()
nb_tr_examples += input_ids.size(0)
nb_tr_steps += 1
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1)
optimizer.step()
scheduler.step()
model.zero_grad()
global_step += 1
'''
if (step + 1) % args.gradient_accumulation_steps == 0:
# modify learning rate with special warm up BERT uses
lr_this_step = args.learning_rate * warmup_linear(global_step / num_train_steps, args.warmup_proportion)
for param_group in optimizer.param_groups:
param_group['lr'] = lr_this_step
optimizer.step()
scheduler.step()
optimizer.zero_grad()
global_step += 1
'''
train_loss = tr_loss / global_step
perplexity = torch.exp(torch.tensor(train_loss)).item()
print("Training loss {} ".format("{:.3f}".format(train_loss)))
print("Training perplexity {}".format("{:.3f}".format(perplexity)))
logger.info("***** Running evaluation *****")
logger.info(" Num examples = %d", len(eval_dataset))
logger.info(" Batch size = %d", batch_size)
eval_loss = 0.0
acc = 0
nb_eval_steps = 0
for batch in notebook.tqdm(eval_dataloader, desc='Evaluating'):
batch = tuple(t.to(device) for t in batch)
input_ids, input_mask, segment_ids, lm_label_ids, is_next = batch
with torch.no_grad():
outputs = model(input_ids, segment_ids, input_mask, lm_label_ids, is_next)
loss = outputs.loss
eval_loss += loss.mean().item()
nb_eval_steps += 1
eval_loss = eval_loss / nb_eval_steps
perplexity = torch.exp(torch.tensor(eval_loss)).item()
print("Evalution loss {} ".format("{:.3f}".format(eval_loss)))
print("Evalution perplexity {}".format("{:.3f}".format(perplexity)))
if not os.path.exists(args.output_dir):
os.makedirs(args.output_dir)
print("Saving model to %s" % args.output_dir)
# Save a trained model, configuration and tokenizer using `save_pretrained()`.
# They can then be reloaded using `from_pretrained()`
model_to_save = model.module if hasattr(model, 'module') else model # Take care of distributed/parallel training
model_to_save.save_pretrained(args.output_dir)
tokenizer.save_pretrained(args.output_dir)
# Save a trained model
#logger.info("** ** * Saving fine - tuned model ** ** * ")
model_to_save = model.module if hasattr(model, 'module') else model # Only save the model it-self
#if args.do_train:
# model_to_save.save_pretrained(self.output_dir)
# tokenizer.save_pretrained(self.output_dir)
def _truncate_seq_pair(tokens_a, tokens_b, max_length):
"""Truncates a sequence pair in place to the maximum length."""
# This is a simple heuristic which will always truncate the longer sequence
# one token at a time. This makes more sense than truncating an equal percent
# of tokens from each, since if one sequence is very short then each token
# that's truncated likely contains more information than a longer sequence.
while True:
total_length = len(tokens_a) + len(tokens_b)
if total_length <= max_length:
break
if len(tokens_a) > len(tokens_b):
tokens_a.pop()
else:
tokens_b.pop()
if __name__ == "__main__":
main()
```
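
For reference, here is a minimal, self-contained sketch of the `BertForPreTraining` forward call that the long script above builds up to. The checkpoint name and example sentences are placeholders, not taken from my data:

```python
import torch
from transformers import BertTokenizer, BertForPreTraining

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")

enc = tokenizer("How are you?", "I am fine.", return_tensors="pt")
# Real training masks ~15% of the tokens and sets the remaining label positions to -100
labels = enc["input_ids"].clone()
outputs = model(**enc, labels=labels, next_sentence_label=torch.tensor([0]))  # 0 = sentence B follows A
print(outputs.loss)
```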
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9666/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9665 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9665/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9665/comments | https://api.github.com/repos/huggingface/transformers/issues/9665/events | https://github.com/huggingface/transformers/issues/9665 | 788,559,996 | MDU6SXNzdWU3ODg1NTk5OTY= | 9,665 | IndexError: index out of bounds when running run_mlm.py | {
"login": "miguelwon",
"id": 7373193,
"node_id": "MDQ6VXNlcjczNzMxOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7373193?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/miguelwon",
"html_url": "https://github.com/miguelwon",
"followers_url": "https://api.github.com/users/miguelwon/followers",
"following_url": "https://api.github.com/users/miguelwon/following{/other_user}",
"gists_url": "https://api.github.com/users/miguelwon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/miguelwon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/miguelwon/subscriptions",
"organizations_url": "https://api.github.com/users/miguelwon/orgs",
"repos_url": "https://api.github.com/users/miguelwon/repos",
"events_url": "https://api.github.com/users/miguelwon/events{/privacy}",
"received_events_url": "https://api.github.com/users/miguelwon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger ",
"It's very hard to help you without being able to reproduce the bug. Could you share a small version of your csv file that reproduces it?",
"Yes, no problem. I just tried with a sample created from the `head`of my `full_corpus.csv` file and got the same error. This is the head:\r\n\r\n```\r\nA tomada de posse já está marcada para esta quarta feira ao fim da tarde...\r\nLobo Xavier está infetado com Covid-19. Esteve no Conselho de Estado na terça-feira.\r\n\"Porque está descida é temporária. Se descessem agora, depois não poderiam explicar a necessidade de uma nova subida.\"\r\nEm acumulação com o Banco de Portugal.\r\n\"EUA: Há muitas maneiras de isto acabar mal. A newsletter Novo Normal do no ECO. Um guia do que pode suceder nas eleições americanas (sentem-se, é melhor)\"\r\nCosta vai substituir presidente do Tribunal de Contas via\r\nComo criar filhos felizes?\r\nUma economia a 90 por cento via\r\nApoio à Retoma Progressiva vai permitir suspender contratos via Falta saber qual o valor do salário e quem o paga.\r\nO perigo de esperar que o Estado nos salve\r\n```",
"The problem is that you are not passing a `max_seq_length` so the script uses the tokenizer `model_lax_length`, which is in turn excessively large (1000000000000000019884624838656). So this results in all your texts not even being able to produce one batch.\r\n\r\nJust pass `--max_seq_length 512` or something else and you should be good.",
"Ok, thanks. It's working now."
] | 1,611 | 1,611 | 1,611 | NONE | null |
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.0.dev0
- Platform: Linux-4.15.0-46-generic-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.6.7
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
?
## Information
Model I am using (Bert, XLNet ...): neuralmind/bert-base-portuguese-cased
## To reproduce
Steps to reproduce the behavior:
I want to fine-tune a pretrained language model using [run_mlm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). I have a corpus file (ful_corpus.csv) that contains one doc (raw text) per line. When I run the following command:
`python run_mlm.py --model_name_or_path "neuralmind/bert-base-portuguese-cased" --train_file ../data/full_corpus.csv --cache_dir /home/mwon/data-mwon/paperChega/src_classificador/data/hugingface --output models/ --do_train`
it results in the error:
```
Traceback (most recent call last):
File "run_mlm.py", line 449, in <module>
main()
File "run_mlm.py", line 384, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/mnt/sdb/data-mwon/paperChega/env2/lib/python3.6/site-packages/datasets/dataset_dict.py", line 303, in map
for k, dataset in self.items()
File "/mnt/sdb/data-mwon/paperChega/env2/lib/python3.6/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp>
for k, dataset in self.items()
File "/mnt/sdb/data-mwon/paperChega/env2/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1260, in map
update_data=update_data,
File "/mnt/sdb/data-mwon/paperChega/env2/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 157, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/mnt/sdb/data-mwon/paperChega/env2/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper
out = func(self, *args, **kwargs)
File "/mnt/sdb/data-mwon/paperChega/env2/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1529, in _map_single
writer.write_batch(batch)
File "/mnt/sdb/data-mwon/paperChega/env2/lib/python3.6/site-packages/datasets/arrow_writer.py", line 278, in write_batch
pa_table = pa.Table.from_pydict(typed_sequence_examples)
File "pyarrow/table.pxi", line 1474, in pyarrow.lib.Table.from_pydict
File "pyarrow/array.pxi", line 322, in pyarrow.lib.asarray
File "pyarrow/array.pxi", line 222, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/mnt/sdb/data-mwon/paperChega/env2/lib/python3.6/site-packages/datasets/arrow_writer.py", line 100, in __arrow_array__
if trying_type and out[0].as_py() != self.data[0]:
File "pyarrow/array.pxi", line 1058, in pyarrow.lib.Array.__getitem__
File "pyarrow/array.pxi", line 540, in pyarrow.lib._normalize_index
IndexError: index out of bounds
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9665/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9664 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9664/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9664/comments | https://api.github.com/repos/huggingface/transformers/issues/9664/events | https://github.com/huggingface/transformers/pull/9664 | 788,541,440 | MDExOlB1bGxSZXF1ZXN0NTU3MDMzODg5 | 9,664 | Missing `return_dict` in Doc example | {
"login": "arnaudsm",
"id": 3920793,
"node_id": "MDQ6VXNlcjM5MjA3OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3920793?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arnaudsm",
"html_url": "https://github.com/arnaudsm",
"followers_url": "https://api.github.com/users/arnaudsm/followers",
"following_url": "https://api.github.com/users/arnaudsm/following{/other_user}",
"gists_url": "https://api.github.com/users/arnaudsm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arnaudsm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arnaudsm/subscriptions",
"organizations_url": "https://api.github.com/users/arnaudsm/orgs",
"repos_url": "https://api.github.com/users/arnaudsm/repos",
"events_url": "https://api.github.com/users/arnaudsm/events{/privacy}",
"received_events_url": "https://api.github.com/users/arnaudsm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Are you sure you checked in v4.2.1? I just checked on both `master` and v4.2.1 and the code executes (as it should!).\r\n\r\nThe `return_dict` was set to be `True` by default in v4.0.0.",
"My bad, I was in the wrong pyenv. Closing the PR."
] | 1,611 | 1,611 | 1,611 | NONE | null | # What does this PR do?
Fixes a crash in [Summary of the tasks](https://huggingface.co/transformers/task_summary.html) documentation, by adding `return_dict=True` to the model() function, as we need `start_logits` and `end_logits` afterwards.
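For context, a rough sketch of the pattern the task-summary example relies on (the model checkpoint and inputs here are placeholders, not the exact snippet from the docs):

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad")

inputs = tokenizer("Who wrote Hamlet?", "Hamlet was written by William Shakespeare.", return_tensors="pt")
outputs = model(**inputs, return_dict=True)  # without return_dict=True, older versions return a plain tuple
start_logits, end_logits = outputs.start_logits, outputs.end_logits
```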
## Fixes
[Issue 9043](https://github.com/huggingface/transformers/issues/9043) (reproduced in 4.2.1)
## Who can review?
@LysandreJik @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9664/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9664",
"html_url": "https://github.com/huggingface/transformers/pull/9664",
"diff_url": "https://github.com/huggingface/transformers/pull/9664.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9664.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9663 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9663/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9663/comments | https://api.github.com/repos/huggingface/transformers/issues/9663/events | https://github.com/huggingface/transformers/pull/9663 | 788,536,778 | MDExOlB1bGxSZXF1ZXN0NTU3MDMwMTA5 | 9,663 | Fix DPRReaderTokenizer's attention_mask | {
"login": "mkserge",
"id": 2992022,
"node_id": "MDQ6VXNlcjI5OTIwMjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2992022?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mkserge",
"html_url": "https://github.com/mkserge",
"followers_url": "https://api.github.com/users/mkserge/followers",
"following_url": "https://api.github.com/users/mkserge/following{/other_user}",
"gists_url": "https://api.github.com/users/mkserge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mkserge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mkserge/subscriptions",
"organizations_url": "https://api.github.com/users/mkserge/orgs",
"repos_url": "https://api.github.com/users/mkserge/repos",
"events_url": "https://api.github.com/users/mkserge/events{/privacy}",
"received_events_url": "https://api.github.com/users/mkserge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you both. Should I close the corresponding issue?",
"Just did! Thanks!"
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes an issue with the `attention_mask` not being properly generated by the DPRReaderTokenizer. Please see issue #9555 for more details.
I added an integration test that checks the DPRReader following a similar example in the test file.
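For illustration, this is roughly the property the new test asserts; the checkpoint name and inputs below are only placeholders, not the exact test case:

```python
from transformers import DPRReaderTokenizer

tokenizer = DPRReaderTokenizer.from_pretrained("facebook/dpr-reader-single-nq-base")
encodings = tokenizer(
    questions=["what is love"],
    titles=["Haddaway"],
    texts=["'What Is Love' is a song recorded by Haddaway."],
    padding="max_length",
    max_length=64,
    return_tensors="pt",
)
# After the fix, padding positions are masked out instead of the mask being all ones.
expected = (encodings["input_ids"] != tokenizer.pad_token_id).long()
assert (encodings["attention_mask"] == expected).all()
```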
I have some test failures due to
```
"AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'"
```
and
```
"AttributeError: module 'wandb' has no attribute 'ensure_configured'"
```
which seem to be unrelated to my code changes.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik and @lhoestq would probably be best positioned to review.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9663/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9663",
"html_url": "https://github.com/huggingface/transformers/pull/9663",
"diff_url": "https://github.com/huggingface/transformers/pull/9663.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9663.patch",
"merged_at": 1611052992000
} |
https://api.github.com/repos/huggingface/transformers/issues/9662 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9662/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9662/comments | https://api.github.com/repos/huggingface/transformers/issues/9662/events | https://github.com/huggingface/transformers/pull/9662 | 788,380,819 | MDExOlB1bGxSZXF1ZXN0NTU2OTAyMzUx | 9,662 | Fix TFTrainer prediction output | {
"login": "janinaj",
"id": 4103541,
"node_id": "MDQ6VXNlcjQxMDM1NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4103541?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/janinaj",
"html_url": "https://github.com/janinaj",
"followers_url": "https://api.github.com/users/janinaj/followers",
"following_url": "https://api.github.com/users/janinaj/following{/other_user}",
"gists_url": "https://api.github.com/users/janinaj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/janinaj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/janinaj/subscriptions",
"organizations_url": "https://api.github.com/users/janinaj/orgs",
"repos_url": "https://api.github.com/users/janinaj/repos",
"events_url": "https://api.github.com/users/janinaj/events{/privacy}",
"received_events_url": "https://api.github.com/users/janinaj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sorry for the confusion. Here is a detailed description of the first issue:\r\n\r\nIf the number of examples is divisible by eval_batch_size, the first batch is predicted twice. Say we have the following examples (n=8): A, B, C, D, E, F, G, H and eval_batch_size = 4. This will create 2 batches: (A,B,C,D) and (E,F,G,H). The current implementation of prediction_loop() will run batch (A,B,C,D), (E,F,G,H) and (A,B,C,D) again. This produces an output with a shape of 12 instead of 8.\r\n\r\nThis causes eval_loss to be computed as (loss(A,B,C,D) + loss(E,F,G,H) + loss(A,B,C,D)) / 2. Other metrics are also computed on A, B, C, D, E, F, G, H, A, B, C, D instead of only on A, B, C, D, E, F, G, H.\r\n\r\nIf the number of examples is not divisible by eval_batch_size, the first batch and part of the second batch are predicted twice. Say we have examples (n=8): A, B, C, D, E, F, G, H and eval_batch_size = 5. The current implementation of prediction_loop() will run batch (A,B,C,D,E), (F,G,H,A,B) and (C,D,E,F,G). This produces an output with a shape of 15 instead of 8. Again, the eval_loss and other metrics are computed incorrectly.\r\n\r\n[This](https://colab.research.google.com/drive/1JH-269TcWWzowDngCmtpEwnmsFbVtL7K) is an example on an actual dataset (code is taken from the run_tf_glue.py example). Notice the assertion error when comparing the number of predicted results to the number of examples in the dataset.\r\n\r\nI did realize that I made a complicated solution to the problem. The main change needed was to not call repeat(). I have committed the simpler solution.",
"Ok, from what I understand of the problem, what you have done is still not acceptable sorry, the build of a dataset must stay as it is because, `repeat` is very important and is mandatory when training on TPU, and `drop_remainder` must stay accessible through the training argument `dataloader_drop_last`, what if someone want it to be `False`?\r\n\r\nIf the problem is in the `prediction_loop` and goes one step to far, you can just stop the loop one step before by replacing:\r\n```\r\nif step == steps:\r\n break\r\n```\r\nby\r\n```\r\nif step == steps: - 1:\r\n break\r\n```\r\n\r\nThis should work.",
"` if step == steps: - 1` only fixes the problem when `num_examples % batch_size`.\r\n\r\nI am sorry if I missed this, but it seems like `prediction_loop` is only called during evaluation/prediction. I am not quite sure how it affects the training process. I did not change `get_train_tfdataset()`; `repeat()` is still in it. At the very least, `repeat()` should be removed in `get_test_tfdataset()` (i.e.during prediction), although I argue that it should be removed in \r\n`get_test_tfdataset()` too because the evaluation loss being reported is incorrect.\r\n",
"After giving a deeper look at the issue, I can see three things to fix:\r\n\r\n1. Replace `if step == steps:` by `if step == steps: - 1:` line 348\r\n2. Replace `metrics[\"eval_loss\"] = self.eval_loss.result().numpy() / steps` by `metrics[\"eval_loss\"] = self.eval_loss.result().numpy() / (steps - 1)` line 356\r\n3. Move `self.eval_loss = tf.keras.metrics.Sum()` from the `prediction_loop` method inside the `__init__` method.\r\n\r\nAfter having done those changes, run a training with the `--dataloader_drop_last` argument. Now you should not see the `0.0` loss value anymore.\r\n\r\nThe argument `--dataloader_drop_last` removes the last batch of the dataset. In order to be sure of that you can know the real size of your dataset by doing `(dataset.cardinality().numpy()// eval_batch_size) * eval_batch_size`. In the case of the MRPC dataset, `dataset.cardinality().numpy() == 408`, while the effective number of examples on which you will evaluate your model is `400`.",
"Thank you for fixing the 0.0 loss value!\r\n\r\nUnfortunately, I think the first issue I mentioned still persists even with your first two fixes. If my math is correct, with your current solution, you use an `eval_batch_size` of 10 when evaluating the MRPC dataset, your model will be evaluated on 410 examples (the loss from the first two examples will be added twice to `eval_loss `). One way to check this is to log/print the shape of `preds`.\r\n\r\nI am sorry if my explanations are confusing. I think my main point is that if I want to evaluate/predict on X examples, I expect it to evaluate/predict on exactly X examples, i.e. `preds.shape = (X, n_tags)` . I actually found this issue when I was using my trained model to predict 929 examples. I was getting 944 predictions, i.e. `predictions.shape = (944, n_tags)`. With your solution, I will be getting 936 examples.\r\n\r\nOn a related note, may I ask if there are unit tests for TFTrainer and if so, where these are located? I was only able to locate the Trainer tests.",
"Also, it seems like `drop_remainder` does not have an effect if you are using `repeat()`.\r\n\r\n```python\r\nimport tensorflow as tf\r\n\r\ndummy_data = tf.data.Dataset.range(8)\r\n```\r\n\r\n```python\r\ntest_batches = (\r\n dummy_data.repeat()\r\n .batch(3, drop_remainder=True)\r\n .prefetch(tf.data.experimental.AUTOTUNE)\r\n )\r\n\r\nsteps = 5\r\nfor step, batch in enumerate(test_batches):\r\n print(batch)\r\n if step == steps:\r\n break\r\n```\r\n\r\n```python\r\ntest_batches_nodrop = (\r\n dummy_data.repeat()\r\n .batch(3, drop_remainder=False)\r\n .prefetch(tf.data.experimental.AUTOTUNE)\r\n )\r\n\r\nprint('Batches when drop_remainder=False')\r\nsteps = 5\r\nfor step, batch in enumerate(test_batches_nodrop):\r\n print(batch)\r\n if step == steps:\r\n break\r\n\r\n```\r\n\r\nBoth code blocks produce the same output:\r\n\r\n```python\r\ntf.Tensor([0 1 2], shape=(3,), dtype=int64)\r\ntf.Tensor([3 4 5], shape=(3,), dtype=int64)\r\ntf.Tensor([6 7 0], shape=(3,), dtype=int64)\r\ntf.Tensor([1 2 3], shape=(3,), dtype=int64)\r\ntf.Tensor([4 5 6], shape=(3,), dtype=int64)\r\ntf.Tensor([7 0 1], shape=(3,), dtype=int64)\r\n```\r\n\r\nIt does have effect on the behavior of `prediction_loop()` because it changes the `steps` to `steps-1`, but that is due to this line:\r\n\r\n`approx = math.floor if self.args.dataloader_drop_last else math.ceil`",
"> I am sorry if my explanations are confusing. I think my main point is that if I want to evaluate/predict on X examples, I expect it to evaluate/predict on exactly X examples, i.e. preds.shape = (X, n_tags) .\r\n\r\nYes, but for this you have to use a compliant batch size, the requirements are to drop the last batch if its size is lower than the required batch size. This is the wanted and expected behavior. See my explanation on MRPC, with a batch size of 16 only 400 examples over 408 will be evaluated. If you want to evaluate over all the examples, you can use a batch size of 8.\r\n\r\n> Also, it seems like drop_remainder does not have an effect if you are using repeat().\r\n\r\nYes, this is normal, as detailed in the documentation. This is one of the reason why we use the approx variable.\r\n",
"I see. Thank you for the explanation, and I am sorry for overlooking the comment about `drop_remainder` not having an effect. In this case, may I ask if it is ok to log the actual number of examples the model is evaluated on, e.g. adding something like: \"Number of examples used for evaluation = 400\"?\r\n\r\nDoes `predict()` require a similar behavior, e.g. is `repeat()` required in `get_test_tfdataset()`? Unlike `evaluate()` it is never used in training. If I set `dataloader_drop_last=True` during training then perform prediction on unlabeled examples after, only 928 of my 929 examples are given a prediction.",
"> I see. Thank you for the explanation, and I am sorry for overlooking the comment about drop_remainder not having an effect.\r\n\r\nNo worries, that's ok :)\r\n\r\n> In this case, may I ask if it is ok to log the actual number of examples the model is evaluated on, e.g. adding something like: \"Number of examples used for evaluation = 400\"?\r\n\r\nSure! That would be a good idea!!\r\n\r\n> Does predict() require a similar behavior, e.g. is repeat() required in get_test_tfdataset()? Unlike evaluate() it is never used in training. If I set dataloader_drop_last=True during training then perform prediction on unlabeled examples after, only 928 of my 929 examples are given a prediction.\r\n\r\nI agree, no need to use `repeat` for predict that uses the test dataset 👍 ",
"I have pushed the discussed changes. A couple of final things:\r\n\r\n> 2. Replace `metrics[\"eval_loss\"] = self.eval_loss.result().numpy() / steps` by `metrics[\"eval_loss\"] = self.eval_loss.result().numpy() / (steps - 1)` line 356\r\n\r\nThis does not need to be replaced. Since the loop terminates when `step== steps - 1`, it actually runs for n=steps times.\r\n\r\n> 3. Move `self.eval_loss = tf.keras.metrics.Sum()` from the `prediction_loop` method inside the `__init__` method.\r\n\r\nThis works but I had to call `eval_loss.reset_states()` inside `prediction_loop()` so the sum from the previous calculations is not added.\r\n"
] | 1,610 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
This PR fixes two issues:
1) The prediction output (specifically prediction_loop) of TFTrainer does not match the dataset cardinality. If the number of examples is divisible by eval_batch_size, the first batch is predicted twice. Else, the first n examples, where n = eval_batch_size - num_examples % eval_batch_size, are predicted twice.
This results in an output shape that is different from the dataset cardinality. This also causes the output of evaluate(), including eval_loss, to be incorrect (e.g. the loss is computed twice for the first few examples). A small `tf.data` sketch of this wrap-around behavior is included after item 2.
2) The evaluation loss is only computed correctly the first time; subsequent computations return 0. Below is a sample output when an evaluation strategy is set during training:
[INFO|trainer_tf.py:398] 2021-01-18 01:34:58,856 >> {**'eval_loss': 0.6290212145038679**, 'eval_acc': 0.6875, ... 'step': 10}
[INFO|trainer_tf.py:398] 2021-01-18 01:35:10,852 >> {**'eval_loss': 0.0**, 'eval_acc': 0.6875, ..., 'step': 20}
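
To make the first issue concrete, here is a tiny standalone `tf.data` sketch (toy data, not the trainer code): with `.repeat()`, the iterator wraps around, so a step count derived from `ceil(num_examples / batch_size)` re-reads the head of the dataset.

```python
import math
import tensorflow as tf

dataset = tf.data.Dataset.range(8)   # 8 "examples"
batches = dataset.repeat().batch(5)  # eval_batch_size = 5
steps = math.ceil(8 / 5)             # 2 steps

seen = [b.numpy() for _, b in zip(range(steps), batches)]
# [array([0, 1, 2, 3, 4]), array([5, 6, 7, 0, 1])] -> examples 0 and 1 appear twice
```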
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
tensorflow: @jplu
Trainer: @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9662/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9662",
"html_url": "https://github.com/huggingface/transformers/pull/9662",
"diff_url": "https://github.com/huggingface/transformers/pull/9662.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9662.patch",
"merged_at": 1611566833000
} |
https://api.github.com/repos/huggingface/transformers/issues/9661 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9661/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9661/comments | https://api.github.com/repos/huggingface/transformers/issues/9661/events | https://github.com/huggingface/transformers/pull/9661 | 788,329,556 | MDExOlB1bGxSZXF1ZXN0NTU2ODU5NzYw | 9,661 | Fix TF Flaubert and XLM | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm checking what is going wrong as the tests of equivalence should not fail.",
"Ok, now it works 👍 "
] | 1,610 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
While experimenting with Flaubert and XLM I realized that building a model with a `None` argument forces this value to stay `None` when the model is served. To fix this issue, the build now takes a proper input without any `None` value.
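A minimal sketch of the idea (illustrative only, not the actual diff; the checkpoint name and dummy values are arbitrary): build/trace the model with concrete tensors so optional inputs are not frozen to `None` in the saved signature.

```python
import tensorflow as tf
from transformers import TFFlaubertModel

model = TFFlaubertModel.from_pretrained("flaubert/flaubert_base_cased")

# Concrete dummy tensors instead of None, so the traced signature keeps these inputs.
dummy_inputs = {
    "input_ids": tf.constant([[0, 158, 735, 1]], dtype=tf.int32),
    "attention_mask": tf.constant([[1, 1, 1, 1]], dtype=tf.int32),
}
_ = model(dummy_inputs)  # builds the model with real values rather than None
tf.saved_model.save(model, "saved_flaubert")
```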
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9661/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9661/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9661",
"html_url": "https://github.com/huggingface/transformers/pull/9661",
"diff_url": "https://github.com/huggingface/transformers/pull/9661.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9661.patch",
"merged_at": 1611075778000
} |
https://api.github.com/repos/huggingface/transformers/issues/9660 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9660/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9660/comments | https://api.github.com/repos/huggingface/transformers/issues/9660/events | https://github.com/huggingface/transformers/issues/9660 | 788,319,662 | MDU6SXNzdWU3ODgzMTk2NjI= | 9,660 | run_ner.py crashes when dev or test contain previously unseen labels | {
"login": "AleksandrsBerdicevskis",
"id": 19609502,
"node_id": "MDQ6VXNlcjE5NjA5NTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/19609502?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AleksandrsBerdicevskis",
"html_url": "https://github.com/AleksandrsBerdicevskis",
"followers_url": "https://api.github.com/users/AleksandrsBerdicevskis/followers",
"following_url": "https://api.github.com/users/AleksandrsBerdicevskis/following{/other_user}",
"gists_url": "https://api.github.com/users/AleksandrsBerdicevskis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AleksandrsBerdicevskis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AleksandrsBerdicevskis/subscriptions",
"organizations_url": "https://api.github.com/users/AleksandrsBerdicevskis/orgs",
"repos_url": "https://api.github.com/users/AleksandrsBerdicevskis/repos",
"events_url": "https://api.github.com/users/AleksandrsBerdicevskis/events{/privacy}",
"received_events_url": "https://api.github.com/users/AleksandrsBerdicevskis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,610 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.2.0dev0
- Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-centos-7.8.2003-Core
- Python version: 3.6.8
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: don't know
### Who can help
@stefan-it
## Information
Model I am using (Bert, XLNet ...): KB/bert-base-swedish-cased
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I am using run_ner.py to POS-tag a Swedish corpus (TalbankenSBX). If dev and/or test files contain a label that is not present in the train file, the script crashes. The same issue arises with any other corpus I try.
## To reproduce
Steps to reproduce the behavior:
1. Create two toy files with the following contents:
sbx-1-train.json:
```
{"words": ["Vem", "får", "rösta", "?"], "pos": ["HP.UTR.SIN.IND", "VB.PRS.AKT", "VB.INF.AKT", "MAD"]}
```
sbx-1-dev.json:
```
{"words": ["är", "född", "1950", "eller", "tidigare", ","], "pos": ["VB.PRS.AKT", "PC.PRF.UTR.SIN.IND.NOM", "RG.NOM", "KN", "AB.KOM", "MID"]}
```
2. Run `python run_ner.py --model_name_or_path KB/bert-base-swedish-cased --train_file sbx-1-train.json --validation_file sbx-1-dev.json --output_dir sbx1 --do_train --do_eval`
This results in:
```
Traceback (most recent call last):
File "run_ner.py", line 412, in <module>
main()
File "run_ner.py", line 303, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/home/sasha/venvs/hugtrans/lib64/python3.6/site-packages/datasets/dataset_dict.py", line 303, in map
for k, dataset in self.items()
File "/home/sasha/venvs/hugtrans/lib64/python3.6/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp>
for k, dataset in self.items()
File "/home/sasha/venvs/hugtrans/lib64/python3.6/site-packages/datasets/arrow_dataset.py", line 1240, in map
update_data = does_function_return_dict(test_inputs, test_indices)
File "/home/sasha/venvs/hugtrans/lib64/python3.6/site-packages/datasets/arrow_dataset.py", line 1211, in does_function_return_dict
function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "run_ner.py", line 288, in tokenize_and_align_labels
label_ids.append(label_to_id[label[word_idx]])
KeyError: 'PC.PRF.UTR.SIN.IND.NOM'
```
...where `PC.PRF.UTR.SIN.IND.NOM` is the tag that is not present in the train set.
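For reference, one possible workaround (my own sketch, not part of run_ner.py) is to build the label list from every provided split instead of only the training file, so that `label_to_id` already contains tags that only occur in dev/test:

```python
import json

def collect_labels(*paths):
    """Union of all 'pos' tags found in the given JSON-lines files (hypothetical helper)."""
    labels = set()
    for path in paths:
        with open(path, encoding="utf-8") as f:
            for line in f:
                labels.update(json.loads(line)["pos"])
    return sorted(labels)

label_list = collect_labels("sbx-1-train.json", "sbx-1-dev.json")
label_to_id = {label: i for i, label in enumerate(label_list)}
# 'PC.PRF.UTR.SIN.IND.NOM' is now a valid key, so the KeyError above is avoided.
```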
## Expected behavior
The script should not crash when encountering unseen tags. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9660/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9659 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9659/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9659/comments | https://api.github.com/repos/huggingface/transformers/issues/9659/events | https://github.com/huggingface/transformers/pull/9659 | 788,270,999 | MDExOlB1bGxSZXF1ZXN0NTU2ODEwOTQ3 | 9,659 | Wav2Vec2 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2669577093,
"node_id": "MDU6TGFiZWwyNjY5NTc3MDkz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PR%20for%20Model%20Addition",
"name": "PR for Model Addition",
"color": "5319e7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"hi @patrickvonplaten thank you for creating this PR and converting some model from original to test it\r\n\r\ni want to test by convert XLSR [model](https://github.com/pytorch/fairseq/tree/master/examples/wav2vec ) using your `convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py`\r\n\r\nby following command `python convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py --checkpoint_path /content/xlsr_53_56k.pt --pytorch_dump_folder_path huggingface_model`\r\n\r\ni got error \r\n\r\n```\r\nFile \"convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py\", line 147, in convert_wav2vec2_checkpoint\r\n [checkpoint_path], arg_overrides={\"data\": dict_path}\r\n File \"/usr/local/lib/python3.6/dist-packages/fairseq/checkpoint_utils.py\", line 279, in load_model_ensemble_and_task\r\n state = load_checkpoint_to_cpu(filename, arg_overrides)\r\n File \"/usr/local/lib/python3.6/dist-packages/fairseq/checkpoint_utils.py\", line 231, in load_checkpoint_to_cpu\r\n setattr(args, arg_name, arg_val)\r\nAttributeError: 'NoneType' object has no attribute 'data'\r\n```\r\n\r\ndo i need to specify the --dict_path argument...if so ..where can i get them ? thanks",
"[this issue](https://github.com/pytorch/fairseq/issues/3050) seems fix my previous problem but the new error comes up\r\n\r\n```\r\nFeat extract conv layer 0 was initialized from feature_extractor.conv_layers.0.0.weight.\r\nTraceback (most recent call last):\r\n File \"convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py\", line 162, in <module>\r\n convert_wav2vec2_checkpoint(args.checkpoint_path, args.pytorch_dump_folder_path, args.dict_path)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py\", line 26, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py\", line 151, in convert_wav2vec2_checkpoint\r\n recursively_load_weights(model, hf_wav2vec)\r\n File \"convert.py\", line 77, in recursively_load_weights\r\n hf_model.config.feat_extract_norm == \"group\",\r\n File \"convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py\", line 112, in load_conv_layer\r\n value.shape == feature_extractor.conv_layers[layer_id].conv.bias.data.shape\r\nAttributeError: 'NoneType' object has no attribute 'data'\r\n```",
"> ```\r\n> AttributeError: 'NoneType' object has no attribute 'data'\r\n> ```\r\n\r\nHey @acul3, \r\n\r\nthanks for trying out the `XLSR` model! I haven't added it to the conversion script yet - will do it this week!",
"thank you for the great idea and work to merge wav2vec2 to transformers. I am wondering:\r\n1. how to use a transformer LM for decoding as Fairseq uses wav2letter's decoder for better accuracy. \r\n2. it seems to be much convenient if the output has a confidence score too",
"> thank you for the great idea and work to merge wav2vec2 to transformers. I am wondering:\r\n> \r\n> 1. how to use a transformer LM for decoding as Fairseq uses wav2letter's decoder for better accuracy.\r\n> 2. it seems to be much convenient if the output has a confidence score too\r\n\r\n1. I'm also still working on figuring out the best way to do this!\r\n2. Yeah that will be a nice-to-have, but it will require some time to be added.",
"@patrickvonplaten Another great PR! \r\nI am wondering whether this current implementation supports self-supervise training of user's custom dataset ?\r\n",
"> supervise\r\n\r\nNot yet :-) Working on it right now!",
"Inspiring feat Patrick! I remember this model was like a puzzle for me the first time I tried to make it work. You've made it incredibly easy to use. Can't wait for the decoders and finetuning",
"\r\n> > ```\r\n> > AttributeError: 'NoneType' object has no attribute 'data'\r\n> > ```\r\n> \r\n> Hey @acul3,\r\n> \r\n> thanks for trying out the `XLSR` model! I haven't added it to the conversion script yet - will do it this week!\r\n\r\nhi @patrickvonplaten is this available yet? thank you",
"@patrickvonplaten \r\nStrange bug, if I use the self trained lv60 960h version on CPU the results are very good\r\nUsing it on cuda the results are pretty strange\r\nI am using the code provided on model card",
"> > ```\r\n> > AttributeError: 'NoneType' object has no attribute 'data'\r\n> > ```\r\n> \r\n> \r\n> Hey @acul3,\r\n> \r\n> \r\n> thanks for trying out the `XLSR` model! I haven't added it to the conversion script yet - will do it this week!\r\n\r\nI am trying to convert `XLSR` models too, by modifying the config below, it seems that I used all weights as wav2vec_small do\r\n```\r\n{\r\n \"activation_dropout\": 0.0,\r\n \"apply_spec_augment\": true,\r\n \"architectures\": [\r\n \"Wav2Vec2Model\"\r\n ],\r\n \"attention_dropout\": 0.0,\r\n \"bos_token_id\": 1,\r\n \"conv_bias\": true,\r\n \"conv_dim\": [\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512\r\n ],\r\n \"conv_kernel\": [\r\n 10,\r\n 3,\r\n 3,\r\n 3,\r\n 3,\r\n 2,\r\n 2\r\n ],\r\n \"conv_stride\": [\r\n 5,\r\n 2,\r\n 2,\r\n 2,\r\n 2,\r\n 2,\r\n 2\r\n ],\r\n \"do_stable_layer_norm\": false,\r\n \"eos_token_id\": 2,\r\n \"feat_extract_activation\": \"gelu\",\r\n \"feat_extract_norm\": \"layer\",\r\n \"feat_proj_dropout\": 0.1,\r\n \"final_dropout\": 0.0,\r\n \"freeze_feat_extract_train\": true,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout\": 0.1,\r\n \"hidden_size\": 1024,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 4096,\r\n \"gradient_checkpointing\": true,\r\n \"layer_norm_eps\": 1e-05,\r\n \"layerdrop\": 0.0,\r\n \"mask_channel_length\": 10,\r\n \"mask_channel_min_space\": 1,\r\n \"mask_channel_other\": 0.0,\r\n \"mask_channel_prob\": 0.0,\r\n \"mask_channel_selection\": \"static\",\r\n \"mask_time_length\": 10,\r\n \"mask_time_min_space\": 1,\r\n \"mask_time_other\": 0.0,\r\n \"mask_time_prob\": 0.05,\r\n \"mask_time_selection\": \"static\",\r\n \"model_type\": \"wav2vec2\",\r\n \"no_mask_channel_overlap\": false,\r\n \"no_mask_time_overlap\": false,\r\n \"num_attention_heads\": 16,\r\n \"num_conv_pos_embedding_groups\": 16,\r\n \"num_conv_pos_embeddings\": 128,\r\n \"num_feat_extract_layers\": 7,\r\n \"num_hidden_layers\": 24,\r\n \"pad_token_id\": 0\r\n}\r\n```\r\nAfter converting:\r\n> Unused weights ['quantizer.vars', 'quantizer.weight_proj.weight', 'quantizer.weight_proj.bias', 'project_q.weight', 'project_q.bias', 'layer_norm.weight', 'layer_norm.bias', 'final_proj.weight', 'final_proj.bias']\r\n\r\nNevertheless, The logits are the same on testing, seems that I still left something not converted? Do you have any ideas what's going on ? \r\n```\r\ntokenizer = Wav2Vec2Tokenizer.from_pretrained(\"facebook/wav2vec2-base\")\r\nmodel = Wav2Vec2Model.from_pretrained(\"./xlsr\")\r\n\r\ndef map_to_array(batch):\r\n speech, _ = sf.read(batch[\"file\"])\r\n batch[\"speech\"] = speech\r\n return batch\r\n\r\nds = load_dataset(\"patrickvonplaten/librispeech_asr_dummy\", \"clean\", split=\"validation\")\r\nds = ds.map(map_to_array)\r\n\r\ninput_values = tokenizer(ds[\"speech\"][0], return_tensors=\"pt\").input_values # Batch size 1\r\nlogits = model(input_values).last_hidden_state\r\n```\r\n\r\nLogits:\r\n```\r\ntensor([[[ 0.2285, 0.6376, 0.0164, ..., 0.0275, 0.1215, -0.2684],\r\n [ 0.2285, 0.6376, 0.0164, ..., 0.0275, 0.1215, -0.2684],\r\n [ 0.2285, 0.6376, 0.0164, ..., 0.0275, 0.1215, -0.2684],\r\n ...,\r\n [ 0.2285, 0.6376, 0.0164, ..., 0.0275, 0.1215, -0.2684],\r\n [ 0.2285, 0.6376, 0.0164, ..., 0.0275, 0.1215, -0.2684],\r\n [ 0.2285, 0.6376, 0.0164, ..., 0.0275, 0.1215, -0.2684]]],\r\n grad_fn=<NativeLayerNormBackward>)\r\n```\r\n"
] | 1,610 | 1,614 | 1,612 | MEMBER | null | # What does this PR do?
Adds Wav2Vec2 from https://github.com/pytorch/fairseq/blob/master/examples/wav2vec/README.md
This PR adds the wav2vec2 Acoustic model to Transformers. The model is different from "conventional" transformer
models since it classifies a raw waveform input (float array) into logits. Therefore the `Wav2Vec2Tokenizer` behaves quite differently from usual tokenizers in that it only pads an input instead of encoding it to token ids.
The fully functional model should be added in the following steps:
1) Add the Acoustic model ready to be used for inference (This PR)
2) Add fine-tuning + pretraining functionality to the model (Next PR)
3) Add an example script showing how Wav2Vec2 can be used with a language model
4) Add an Encoder/Decoder version of Wav2Vec2.
# Usage
One can play around with a quick demo here: https://colab.research.google.com/drive/1xaVKQ739-ue0v8IuMZoMzOFc4-NlGQyd?usp=sharing and some usage examples as described on the model cards:
https://huggingface.co/models?filter=wav2vec2
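For quick reference, a minimal inference sketch along the lines of the model cards (the checkpoint name and the audio file are placeholders; audio is expected as a 16 kHz mono float array):

```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Tokenizer

tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

speech, sampling_rate = sf.read("sample.flac")                       # raw waveform, 16 kHz mono
input_values = tokenizer(speech, return_tensors="pt").input_values  # only padded, not encoded to ids

logits = model(input_values).logits               # (batch, time, vocab)
predicted_ids = torch.argmax(logits, dim=-1)      # greedy CTC decoding
transcription = tokenizer.batch_decode(predicted_ids)[0]
print(transcription)
```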
# Review
In this PR, no training functionality is added to the model. Because this is quite complex for Wav2Vec2, it will be done in a follow-up PR.
It would be great if we could nevertheless already merge this first part, which allows using Wav2Vec2 for inference.
The tokenizer is quite different, so it would be great if @thomwolf @n1t0 could also take a look here.
Since the model expects the raw waveform signal as an input, the name `input_ids` is changed to `input_values`, standing for "a tensor of float values". It would be great if you could check this @LysandreJik @sgugger @thomwolf.
# Done:
- [x] load pretrained weight into model
- [x] make sure forward pass yields equal outputs
- [x] successful transcription
- [x] add tokenizer
- [x] Think about how to handle the two different architectures: `Wav2Vec 2.0 Large (LV-60)`/`Wav2Vec 2.0 Large (LV-60) + Self Training` differ from `Wav2Vec 2.0 Large`/`Wav2Vec 2.0 Base` (layer_norm is switched and no group norm is used)
- [x] add model tests
- [x] add tokenizer tests
- [x] add docstring
- [x] clean config
# Future TODO:
- [ ] Add PreTraining & Fine-Tuning to model
- [ ] Add Encoder Decoder model / CTC decoding
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9659/reactions",
"total_count": 19,
"+1": 14,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 5,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9659/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9659",
"html_url": "https://github.com/huggingface/transformers/pull/9659",
"diff_url": "https://github.com/huggingface/transformers/pull/9659.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9659.patch",
"merged_at": 1612270331000
} |
https://api.github.com/repos/huggingface/transformers/issues/9658 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9658/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9658/comments | https://api.github.com/repos/huggingface/transformers/issues/9658/events | https://github.com/huggingface/transformers/issues/9658 | 788,243,617 | MDU6SXNzdWU3ODgyNDM2MTc= | 9,658 | Tokenizstion | {
"login": "Syavaprd",
"id": 38497601,
"node_id": "MDQ6VXNlcjM4NDk3NjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/38497601?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Syavaprd",
"html_url": "https://github.com/Syavaprd",
"followers_url": "https://api.github.com/users/Syavaprd/followers",
"following_url": "https://api.github.com/users/Syavaprd/following{/other_user}",
"gists_url": "https://api.github.com/users/Syavaprd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Syavaprd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Syavaprd/subscriptions",
"organizations_url": "https://api.github.com/users/Syavaprd/orgs",
"repos_url": "https://api.github.com/users/Syavaprd/repos",
"events_url": "https://api.github.com/users/Syavaprd/events{/privacy}",
"received_events_url": "https://api.github.com/users/Syavaprd/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"So in short:\r\n- in public open-source projects like this you can say \"Hi all\" or \"Hi folks\", this way you'll address contributors of all genders\r\n- also, do you mind closing this issue and opening a thread on the forum at https://discuss.huggingface.co? We keep the issues for bug reports and feature requests which this issue is not.\r\n- last note for your future issues on open-source projects: when there is an issue template like here you should fill it that's typically the first thing the maintainers will ask you if you didn't do it.\r\n\r\nThis looks like a nice project good luck with it and I hope you'll succeed!"
] | 1,610 | 1,610 | 1,610 | NONE | null | Hi all, help me please. I'm trying to solve the multilingual WiC problem using XLM-R.
I have a word and a sentence that contains this word, possibly in a different form. I want to find the positions of the word's token ids in the encoded sentence.
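A sketch of one approach that might help (my own suggestion, not from this thread): fast tokenizers such as `XLMRobertaTokenizerFast` can return character-to-token offset mappings, which lets you map the word's character span to token positions without relying on whitespace.

```python
from transformers import XLMRobertaTokenizerFast

tokenizer = XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-base")

sentence = "She went to the bank to deposit money."
word = "bank"  # assumes the surface form in the sentence is known; inflected forms need their own matching

enc = tokenizer(sentence, return_offsets_mapping=True)
start = sentence.index(word)
end = start + len(word)

# Keep token indices whose character span overlaps the word; (0, 0) spans are special tokens.
positions = [
    i for i, (s, e) in enumerate(enc["offset_mapping"])
    if (s, e) != (0, 0) and s < end and e > start
]
print(positions, [enc["input_ids"][i] for i in positions])
```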
There are problems with Chinese, because many symbols are not separated by spaces and the tokenizer can encode a pair of symbols differently. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9658/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9657 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9657/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9657/comments | https://api.github.com/repos/huggingface/transformers/issues/9657/events | https://github.com/huggingface/transformers/issues/9657 | 788,234,700 | MDU6SXNzdWU3ODgyMzQ3MDA= | 9,657 | ModuleAttributeError occurs during Converting TensorFlow Checkpoints (BERT) | {
"login": "forest1988",
"id": 2755894,
"node_id": "MDQ6VXNlcjI3NTU4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forest1988",
"html_url": "https://github.com/forest1988",
"followers_url": "https://api.github.com/users/forest1988/followers",
"following_url": "https://api.github.com/users/forest1988/following{/other_user}",
"gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forest1988/subscriptions",
"organizations_url": "https://api.github.com/users/forest1988/orgs",
"repos_url": "https://api.github.com/users/forest1988/repos",
"events_url": "https://api.github.com/users/forest1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/forest1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, how did you obtain your TensorFlow checkpoint? Was it trained with http://github.com/google-research/bert?",
"Hi @LysandreJik,\r\n\r\nThank you for giving me the comment.\r\n\r\nThe TensorFlow checkpoint is not my own but is provided by a researcher. There may be my misreading about the related paper, but in the paper the researcher says:\r\n\r\n- They fine-tune all the parameters including the BERT and the two additional linear layers.\r\n- They directly used public pretrained parameters of BERT from https://github.com/google-research/bert\r\n\r\nFrom the information in the paper, I think the checkpoint is trained with the URL you showed me.",
"I'm having a hard time reproducing the issue, the following works:\r\n\r\n```\r\ntransformers-cli convert --model_type bert \\\r\n --tf_checkpoint bert_model.ckpt \\\r\n --config bert_config.json \\\r\n --pytorch_dump_output pytorch_model.bin\r\n\r\n```\r\n\r\non both of these:\r\n\r\n```\r\nBERT-Base, Uncased: 12-layer, 768-hidden, 12-heads, 110M parameters\r\nBERT-Large, Uncased: 24-layer, 1024-hidden, 16-heads, 340M parameters\r\n```\r\n\r\nDo you know of any difference between those architectures and the one you have?",
"Thank you for taking your time to reproduce this issue.\r\n\r\nThe checkpoint is using `uncased_L-12_H-768_A-12/bert_model.ckpt` as an initial checkpoint for fine-tuning.\r\nHence, the checkpoint seems to be an `Uncased` model.\r\n\r\nThe `BertConfig` of the checkpoint says the architecture has the following parameters:\r\n\r\n```\r\n12-layer, 768-hidden, 12-heads\r\n```\r\n\r\nFor your reference, the items below in the fine-tuned checkpoint folder (which I referred to as`$MODEL_DIR`).\r\n``` sh\r\n****@**** $ ls\r\ncheckpoint model.ckpt.data-00000-of-00001 model.ckpt.index model.ckpt.meta\r\n```\r\n\r\nI would like to try converting on another checkpoint as well and see if I get the same problem.\r\n",
"```\r\n****@**** $ pwd\r\n/****/uncased_L-12_H-768_A-12\r\n****@**** $ ls\r\nbert_config.json bert_model.ckpt.data-00000-of-00001 bert_model.ckpt.index bert_model.ckpt.meta vocab.txt\r\n****@**** $ transformers-cli convert --model_type bert \\\r\n> --tf_checkpoint bert_model.ckpt \\\r\n> --config bert_config.json \\\r\n> --pytorch_dump_output pytorch_model.bin\r\n```\r\n\r\nIt worked without any error, and showed `Save PyTorch model to pytorch_model.bin`.\r\n\r\nIs it possible that `vocab.txt` needs to be in the same folder as `ckpt`, not in the same folder as `bert_config.json`?\r\nI'm sorry if I'm missing the point.",
"So it worked with the second BERT model but not with the first? Do you know of any difference between the first and second?\r\n\r\nThe `vocab.txt` shouldn't have an impact; this is for the tokenizer and it can be automatically loaded by the `BertTokenizer`",
"Yes, it worked with the second one but not with the first one.\r\n- the first BERT model: fine-tuned and provided by a third-party, using the official pre-trained model as an initial point.\r\n- the second BERT model: official pre-trained model provided in https://github.com/google-research/bert\r\n\r\nThank you for telling me that `vocab.txt` is not the cause.\r\nBefore I saw your comment, I had tried putting vocab.txt in the same folder, but I still got the same error.\r\n\r\nThe output during the conversion of the first model is as below.\r\nIt seems `bert/embeddings` is skipped.\r\n\r\n``` sh\r\n2021-01-19 15:00:49.880931: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\nBuilding PyTorch model from configuration: BertConfig {\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"gradient_checkpointing\": false,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"bert\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"pad_token_id\": 0,\r\n \"position_embedding_type\": \"absolute\",\r\n \"type_vocab_size\": 2,\r\n \"vocab_size\": 30522\r\n}\r\n\r\nConverting TensorFlow checkpoint from /****/model.ckpt\r\nLoading TF weight bert/embeddings/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/embeddings/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/embeddings/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/embeddings/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/embeddings/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/embeddings/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/embeddings/position_embeddings with shape [512, 768]\r\nLoading TF weight bert/embeddings/position_embeddings/adam_m with shape [512, 768]\r\nLoading TF weight bert/embeddings/position_embeddings/adam_v with shape [512, 768]\r\nLoading TF weight bert/embeddings/relation_embedding with shape [47, 768]\r\nLoading TF weight bert/embeddings/token_type_embeddings with shape [2, 768]\r\nLoading TF weight bert/embeddings/token_type_embeddings/adam_m with shape [2, 768]\r\nLoading TF weight bert/embeddings/token_type_embeddings/adam_v with shape [2, 768]\r\nLoading TF weight bert/embeddings/word_embeddings with shape [30522, 768]\r\n2021-01-19 15:00:58.056232: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 93763584 exceeds 10% of free system memory.\r\nLoading TF weight bert/embeddings/word_embeddings/adam_m with shape [30522, 768]\r\n2021-01-19 15:00:58.912772: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 93763584 exceeds 10% of free system memory.\r\nLoading TF weight bert/embeddings/word_embeddings/adam_v with shape [30522, 768]\r\n2021-01-19 15:00:59.755334: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 93763584 exceeds 10% of free system memory.\r\nLoading TF weight bert/encoder/layer_0/attention/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_0/attention/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_0/attention/output/LayerNorm/beta/adam_v with shape [768] \r\nLoading TF weight bert/encoder/layer_0/attention/output/dense/bias with shape [768]\r\nLoading TF weight 
bert/encoder/layer_0/attention/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_0/attention/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_0/attention/output/dense/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_0/attention/output/dense/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_0/attention/output/dense/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_0/attention/self/key/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_0/attention/self/key/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_0/attention/self/key/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_0/attention/self/key/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_0/attention/self/key/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_0/attention/self/key/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_0/attention/self/query/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_0/attention/self/query/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_0/attention/self/query/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_0/attention/self/query/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_0/attention/self/query/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_0/attention/self/query/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_0/attention/self/value/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_0/attention/self/value/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_0/attention/self/value/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_0/attention/self/value/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_0/attention/self/value/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_0/attention/self/value/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_0/intermediate/dense/bias with shape [3072]\r\nLoading TF weight bert/encoder/layer_0/intermediate/dense/bias/adam_m with shape [3072]\r\nLoading TF weight bert/encoder/layer_0/intermediate/dense/bias/adam_v with shape [3072]\r\nLoading TF weight bert/encoder/layer_0/intermediate/dense/kernel with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_0/intermediate/dense/kernel/adam_m with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_0/intermediate/dense/kernel/adam_v with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_0/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_0/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_0/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_0/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_0/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_0/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_0/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_0/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_0/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_0/output/dense/kernel with shape [3072, 768]\r\nLoading TF weight 
bert/encoder/layer_0/output/dense/kernel/adam_m with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_0/output/dense/kernel/adam_v with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_1/attention/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_1/attention/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_1/attention/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_1/attention/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_1/attention/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_1/attention/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_1/attention/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_1/attention/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_1/attention/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_1/attention/output/dense/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_1/attention/output/dense/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_1/attention/output/dense/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_1/attention/self/key/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_1/attention/self/key/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_1/attention/self/key/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_1/attention/self/key/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_1/attention/self/key/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_1/attention/self/key/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_1/attention/self/query/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_1/attention/self/query/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_1/attention/self/query/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_1/attention/self/query/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_1/attention/self/query/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_1/attention/self/query/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_1/attention/self/value/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_1/attention/self/value/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_1/attention/self/value/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_1/attention/self/value/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_1/attention/self/value/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_1/attention/self/value/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_1/intermediate/dense/bias with shape [3072]\r\nLoading TF weight bert/encoder/layer_1/intermediate/dense/bias/adam_m with shape [3072]\r\nLoading TF weight bert/encoder/layer_1/intermediate/dense/bias/adam_v with shape [3072]\r\nLoading TF weight bert/encoder/layer_1/intermediate/dense/kernel with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_1/intermediate/dense/kernel/adam_m with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_1/intermediate/dense/kernel/adam_v with shape [768, 3072]\r\nLoading TF weight 
bert/encoder/layer_1/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_1/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_1/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_1/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_1/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_1/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_1/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_1/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_1/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_1/output/dense/kernel with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_1/output/dense/kernel/adam_m with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_1/output/dense/kernel/adam_v with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_10/attention/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_10/attention/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_10/attention/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_10/attention/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_10/attention/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_10/attention/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_10/attention/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_10/attention/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_10/attention/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_10/attention/output/dense/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_10/attention/output/dense/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_10/attention/output/dense/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_10/attention/self/key/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_10/attention/self/key/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_10/attention/self/key/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_10/attention/self/key/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_10/attention/self/key/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_10/attention/self/key/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_10/attention/self/query/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_10/attention/self/query/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_10/attention/self/query/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_10/attention/self/query/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_10/attention/self/query/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_10/attention/self/query/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_10/attention/self/value/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_10/attention/self/value/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_10/attention/self/value/bias/adam_v with shape 
[768]\r\nLoading TF weight bert/encoder/layer_10/attention/self/value/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_10/attention/self/value/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_10/attention/self/value/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_10/intermediate/dense/bias with shape [3072]\r\nLoading TF weight bert/encoder/layer_10/intermediate/dense/bias/adam_m with shape [3072]\r\nLoading TF weight bert/encoder/layer_10/intermediate/dense/bias/adam_v with shape [3072]\r\nLoading TF weight bert/encoder/layer_10/intermediate/dense/kernel with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_10/intermediate/dense/kernel/adam_m with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_10/intermediate/dense/kernel/adam_v with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_10/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_10/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_10/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_10/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_10/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_10/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_10/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_10/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_10/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_10/output/dense/kernel with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_10/output/dense/kernel/adam_m with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_10/output/dense/kernel/adam_v with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_11/attention/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_11/attention/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_11/attention/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_11/attention/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_11/attention/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_11/attention/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_11/attention/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_11/attention/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_11/attention/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_11/attention/output/dense/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_11/attention/output/dense/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_11/attention/output/dense/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_11/attention/self/key/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_11/attention/self/key/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_11/attention/self/key/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_11/attention/self/key/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_11/attention/self/key/kernel/adam_m with shape [768, 768]\r\nLoading TF weight 
bert/encoder/layer_11/attention/self/key/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_11/attention/self/query/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_11/attention/self/query/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_11/attention/self/query/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_11/attention/self/query/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_11/attention/self/query/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_11/attention/self/query/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_11/attention/self/value/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_11/attention/self/value/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_11/attention/self/value/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_11/attention/self/value/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_11/attention/self/value/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_11/attention/self/value/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_11/intermediate/dense/bias with shape [3072]\r\nLoading TF weight bert/encoder/layer_11/intermediate/dense/bias/adam_m with shape [3072]\r\nLoading TF weight bert/encoder/layer_11/intermediate/dense/bias/adam_v with shape [3072]\r\nLoading TF weight bert/encoder/layer_11/intermediate/dense/kernel with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_11/intermediate/dense/kernel/adam_m with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_11/intermediate/dense/kernel/adam_v with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_11/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_11/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_11/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_11/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_11/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_11/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_11/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_11/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_11/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_11/output/dense/kernel with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_11/output/dense/kernel/adam_m with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_11/output/dense/kernel/adam_v with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_2/attention/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_2/attention/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_2/attention/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_2/attention/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_2/attention/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_2/attention/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_2/attention/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_2/attention/output/dense/bias/adam_m with 
shape [768]\r\nLoading TF weight bert/encoder/layer_2/attention/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_2/attention/output/dense/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_2/attention/output/dense/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_2/attention/output/dense/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/key/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/key/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/key/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/key/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/key/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/key/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/key/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/key/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/key/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/key/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/key/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/key/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/query/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/query/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/query/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/query/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/query/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/query/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/value/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/value/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/value/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/value/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/value/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_2/attention/self/value/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_2/intermediate/dense/bias with shape [3072]\r\nLoading TF weight bert/encoder/layer_2/intermediate/dense/bias/adam_m with shape [3072]\r\nLoading TF weight bert/encoder/layer_2/intermediate/dense/bias/adam_v with shape [3072]\r\nLoading TF weight bert/encoder/layer_2/intermediate/dense/kernel with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_2/intermediate/dense/kernel/adam_m with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_2/intermediate/dense/kernel/adam_v with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_2/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_2/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_2/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_2/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight 
bert/encoder/layer_2/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_2/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_2/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_2/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_2/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_2/output/dense/kernel with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_2/output/dense/kernel/adam_m with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_2/output/dense/kernel/adam_v with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_3/attention/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_3/attention/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_3/attention/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_3/attention/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_3/attention/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_3/attention/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_3/attention/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_3/attention/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_3/attention/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_3/attention/output/dense/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_3/attention/output/dense/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_3/attention/output/dense/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_3/attention/self/key/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_3/attention/self/key/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_3/attention/self/key/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_3/attention/self/key/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_3/attention/self/key/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_3/attention/self/key/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_3/attention/self/query/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_3/attention/self/query/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_3/attention/self/query/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_3/attention/self/query/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_3/attention/self/query/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_3/attention/self/query/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_3/attention/self/value/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_3/attention/self/value/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_3/attention/self/value/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_3/attention/self/value/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_3/attention/self/value/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_3/attention/self/value/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_3/intermediate/dense/bias with 
shape [3072]\r\nLoading TF weight bert/encoder/layer_3/intermediate/dense/bias/adam_m with shape [3072]\r\nLoading TF weight bert/encoder/layer_3/intermediate/dense/bias/adam_v with shape [3072]\r\nLoading TF weight bert/encoder/layer_3/intermediate/dense/kernel with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_3/intermediate/dense/kernel/adam_m with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_3/intermediate/dense/kernel/adam_v with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_3/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_3/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_3/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_3/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_3/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_3/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_3/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_3/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_3/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_3/output/dense/kernel with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_3/output/dense/kernel/adam_m with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_3/output/dense/kernel/adam_v with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_4/attention/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_4/attention/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_4/attention/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_4/attention/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_4/attention/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_4/attention/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_4/attention/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_4/attention/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_4/attention/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_4/attention/output/dense/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_4/attention/output/dense/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_4/attention/output/dense/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_4/attention/self/key/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_4/attention/self/key/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_4/attention/self/key/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_4/attention/self/key/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_4/attention/self/key/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_4/attention/self/key/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_4/attention/self/query/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_4/attention/self/query/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_4/attention/self/query/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_4/attention/self/query/kernel with 
shape [768, 768]\r\nLoading TF weight bert/encoder/layer_4/attention/self/query/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_4/attention/self/query/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_4/attention/self/value/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_4/attention/self/value/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_4/attention/self/value/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_4/attention/self/value/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_4/attention/self/value/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_4/attention/self/value/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_4/intermediate/dense/bias with shape [3072]\r\nLoading TF weight bert/encoder/layer_4/intermediate/dense/bias/adam_m with shape [3072]\r\nLoading TF weight bert/encoder/layer_4/intermediate/dense/bias/adam_v with shape [3072]\r\nLoading TF weight bert/encoder/layer_4/intermediate/dense/kernel with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_4/intermediate/dense/kernel/adam_m with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_4/intermediate/dense/kernel/adam_v with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_4/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_4/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_4/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_4/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_4/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_4/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_4/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_4/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_4/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_4/output/dense/kernel with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_4/output/dense/kernel/adam_m with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_4/output/dense/kernel/adam_v with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_5/attention/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_5/attention/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_5/attention/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_5/attention/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_5/attention/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_5/attention/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_5/attention/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_5/attention/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_5/attention/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_5/attention/output/dense/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_5/attention/output/dense/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_5/attention/output/dense/kernel/adam_v with shape [768, 768]\r\nLoading TF weight 
bert/encoder/layer_5/attention/self/key/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_5/attention/self/key/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_5/attention/self/key/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_5/attention/self/key/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_5/attention/self/key/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_5/attention/self/key/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_5/attention/self/query/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_5/attention/self/query/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_5/attention/self/query/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_5/attention/self/query/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_5/attention/self/query/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_5/attention/self/query/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_5/attention/self/value/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_5/attention/self/value/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_5/attention/self/value/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_5/attention/self/value/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_5/attention/self/value/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_5/attention/self/value/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_5/intermediate/dense/bias with shape [3072]\r\nLoading TF weight bert/encoder/layer_5/intermediate/dense/bias/adam_m with shape [3072]\r\nLoading TF weight bert/encoder/layer_5/intermediate/dense/bias/adam_v with shape [3072]\r\nLoading TF weight bert/encoder/layer_5/intermediate/dense/kernel with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_5/intermediate/dense/kernel/adam_m with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_5/intermediate/dense/kernel/adam_v with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_5/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_5/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_5/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_5/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_5/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_5/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_5/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_5/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_5/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_5/output/dense/kernel with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_5/output/dense/kernel/adam_m with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_5/output/dense/kernel/adam_v with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_6/attention/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_6/attention/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_6/attention/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight 
bert/encoder/layer_6/attention/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_6/attention/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_6/attention/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_6/attention/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_6/attention/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_6/attention/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_6/attention/output/dense/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_6/attention/output/dense/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_6/attention/output/dense/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_6/attention/self/key/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_6/attention/self/key/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_6/attention/self/key/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_6/attention/self/key/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_6/attention/self/key/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_6/attention/self/key/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_6/attention/self/query/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_6/attention/self/query/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_6/attention/self/query/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_6/attention/self/query/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_6/attention/self/query/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_6/attention/self/query/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_6/attention/self/value/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_6/attention/self/value/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_6/attention/self/value/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_6/attention/self/value/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_6/attention/self/value/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_6/attention/self/value/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_6/intermediate/dense/bias with shape [3072]\r\nLoading TF weight bert/encoder/layer_6/intermediate/dense/bias/adam_m with shape [3072]\r\nLoading TF weight bert/encoder/layer_6/intermediate/dense/bias/adam_v with shape [3072]\r\nLoading TF weight bert/encoder/layer_6/intermediate/dense/kernel with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_6/intermediate/dense/kernel/adam_m with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_6/intermediate/dense/kernel/adam_v with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_6/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_6/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_6/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_6/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_6/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight 
bert/encoder/layer_6/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_6/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_6/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_6/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_6/output/dense/kernel with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_6/output/dense/kernel/adam_m with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_6/output/dense/kernel/adam_v with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_7/attention/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_7/attention/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_7/attention/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_7/attention/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_7/attention/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_7/attention/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_7/attention/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_7/attention/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_7/attention/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_7/attention/output/dense/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_7/attention/output/dense/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_7/attention/output/dense/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_7/attention/self/key/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_7/attention/self/key/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_7/attention/self/key/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_7/attention/self/key/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_7/attention/self/key/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_7/attention/self/key/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_7/attention/self/query/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_7/attention/self/query/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_7/attention/self/query/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_7/attention/self/query/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_7/attention/self/query/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_7/attention/self/query/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_7/attention/self/value/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_7/attention/self/value/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_7/attention/self/value/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_7/attention/self/value/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_7/attention/self/value/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_7/attention/self/value/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_7/intermediate/dense/bias with shape [3072]\r\nLoading TF weight bert/encoder/layer_7/intermediate/dense/bias/adam_m 
with shape [3072]\r\nLoading TF weight bert/encoder/layer_7/intermediate/dense/bias/adam_v with shape [3072]\r\nLoading TF weight bert/encoder/layer_7/intermediate/dense/kernel with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_7/intermediate/dense/kernel/adam_m with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_7/intermediate/dense/kernel/adam_v with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_7/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_7/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_7/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_7/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_7/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_7/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_7/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_7/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_7/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_7/output/dense/kernel with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_7/output/dense/kernel/adam_m with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_7/output/dense/kernel/adam_v with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_8/attention/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_8/attention/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_8/attention/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_8/attention/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_8/attention/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_8/attention/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_8/attention/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_8/attention/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_8/attention/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_8/attention/output/dense/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_8/attention/output/dense/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_8/attention/output/dense/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_8/attention/self/key/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_8/attention/self/key/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_8/attention/self/key/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_8/attention/self/key/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_8/attention/self/key/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_8/attention/self/key/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_8/attention/self/query/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_8/attention/self/query/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_8/attention/self/query/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_8/attention/self/query/kernel with shape [768, 768]\r\nLoading TF weight 
bert/encoder/layer_8/attention/self/query/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_8/attention/self/query/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_8/attention/self/value/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_8/attention/self/value/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_8/attention/self/value/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_8/attention/self/value/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_8/attention/self/value/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_8/attention/self/value/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_8/intermediate/dense/bias with shape [3072]\r\nLoading TF weight bert/encoder/layer_8/intermediate/dense/bias/adam_m with shape [3072]\r\nLoading TF weight bert/encoder/layer_8/intermediate/dense/bias/adam_v with shape [3072]\r\nLoading TF weight bert/encoder/layer_8/intermediate/dense/kernel with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_8/intermediate/dense/kernel/adam_m with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_8/intermediate/dense/kernel/adam_v with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_8/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_8/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_8/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_8/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_8/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_8/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_8/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_8/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_8/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_8/output/dense/kernel with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_8/output/dense/kernel/adam_m with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_8/output/dense/kernel/adam_v with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_9/attention/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_9/attention/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_9/attention/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_9/attention/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_9/attention/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_9/attention/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_9/attention/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_9/attention/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_9/attention/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_9/attention/output/dense/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_9/attention/output/dense/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_9/attention/output/dense/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_9/attention/self/key/bias with shape 
[768]\r\nLoading TF weight bert/encoder/layer_9/attention/self/key/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_9/attention/self/key/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_9/attention/self/key/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_9/attention/self/key/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_9/attention/self/key/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_9/attention/self/query/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_9/attention/self/query/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_9/attention/self/query/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_9/attention/self/query/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_9/attention/self/query/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_9/attention/self/query/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_9/attention/self/value/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_9/attention/self/value/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_9/attention/self/value/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_9/attention/self/value/kernel with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_9/attention/self/value/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_9/attention/self/value/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/encoder/layer_9/intermediate/dense/bias with shape [3072]\r\nLoading TF weight bert/encoder/layer_9/intermediate/dense/bias/adam_m with shape [3072]\r\nLoading TF weight bert/encoder/layer_9/intermediate/dense/bias/adam_v with shape [3072]\r\nLoading TF weight bert/encoder/layer_9/intermediate/dense/kernel with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_9/intermediate/dense/kernel/adam_m with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_9/intermediate/dense/kernel/adam_v with shape [768, 3072]\r\nLoading TF weight bert/encoder/layer_9/output/LayerNorm/beta with shape [768]\r\nLoading TF weight bert/encoder/layer_9/output/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_9/output/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_9/output/LayerNorm/gamma with shape [768]\r\nLoading TF weight bert/encoder/layer_9/output/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_9/output/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_9/output/dense/bias with shape [768]\r\nLoading TF weight bert/encoder/layer_9/output/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/encoder/layer_9/output/dense/bias/adam_v with shape [768]\r\nLoading TF weight bert/encoder/layer_9/output/dense/kernel with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_9/output/dense/kernel/adam_m with shape [3072, 768]\r\nLoading TF weight bert/encoder/layer_9/output/dense/kernel/adam_v with shape [3072, 768]\r\nLoading TF weight bert/final_output/token_score/bias with shape [3]\r\nLoading TF weight bert/final_output/token_score/kernel with shape [768, 3]\r\nLoading TF weight bert/pooler/dense/bias with shape [768]\r\nLoading TF weight bert/pooler/dense/bias/adam_m with shape [768]\r\nLoading TF weight bert/pooler/dense/bias/adam_v with shape [768]\r\nLoading TF weight 
bert/pooler/dense/kernel with shape [768, 768]\r\nLoading TF weight bert/pooler/dense/kernel/adam_m with shape [768, 768]\r\nLoading TF weight bert/pooler/dense/kernel/adam_v with shape [768, 768]\r\nLoading TF weight bert/relation/bias with shape [768]\r\nLoading TF weight bert/relation/kernel with shape [768, 768]\r\nLoading TF weight global_step with shape []\r\nLoading TF weight loss/cls/predictions/output_bias with shape [30522]\r\nLoading TF weight loss/cls/predictions/output_bias/adam_m with shape [30522]\r\nLoading TF weight loss/cls/predictions/output_bias/adam_v with shape [30522]\r\nLoading TF weight loss/cls/predictions/transform/LayerNorm/beta with shape [768]\r\nLoading TF weight loss/cls/predictions/transform/LayerNorm/beta/adam_m with shape [768]\r\nLoading TF weight loss/cls/predictions/transform/LayerNorm/beta/adam_v with shape [768]\r\nLoading TF weight loss/cls/predictions/transform/LayerNorm/gamma with shape [768]\r\nLoading TF weight loss/cls/predictions/transform/LayerNorm/gamma/adam_m with shape [768]\r\nLoading TF weight loss/cls/predictions/transform/LayerNorm/gamma/adam_v with shape [768]\r\nLoading TF weight loss/cls/predictions/transform/dense/bias with shape [768]\r\nLoading TF weight loss/cls/predictions/transform/dense/bias/adam_m with shape [768]\r\nLoading TF weight loss/cls/predictions/transform/dense/bias/adam_v with shape [768]\r\nLoading TF weight loss/cls/predictions/transform/dense/kernel with shape [768, 768]\r\nLoading TF weight loss/cls/predictions/transform/dense/kernel/adam_m with shape [768, 768]\r\nLoading TF weight loss/cls/predictions/transform/dense/kernel/adam_v with shape [768, 768]\r\nLoading TF weight output_bias with shape [2]\r\nLoading TF weight output_bias/adam_m with shape [2]\r\nLoading TF weight output_bias/adam_v with shape [2]\r\nLoading TF weight output_weights with shape [2, 768]\r\nLoading TF weight output_weights/adam_m with shape [2, 768]\r\nLoading TF weight output_weights/adam_v with shape [2, 768]\r\nInitialize PyTorch weight ['bert', 'embeddings', 'LayerNorm', 'beta']\r\nSkipping bert/embeddings/LayerNorm/beta/adam_m\r\nSkipping bert/embeddings/LayerNorm/beta/adam_v\r\nInitialize PyTorch weight ['bert', 'embeddings', 'LayerNorm', 'gamma']\r\nSkipping bert/embeddings/LayerNorm/gamma/adam_m\r\nSkipping bert/embeddings/LayerNorm/gamma/adam_v\r\nInitialize PyTorch weight ['bert', 'embeddings', 'position_embeddings']\r\nSkipping bert/embeddings/position_embeddings/adam_m\r\nSkipping bert/embeddings/position_embeddings/adam_v\r\nSkipping bert/embeddings/relation_embedding\r\nTraceback (most recent call last):\r\n File \"/****/.pyenv/versions/anaconda3-2020.07/bin/transformers-cli\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/****/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/commands/transformers_cli.py\", line 51, in main\r\n service.run()\r\n File \"/****/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/commands/convert.py\", line 105, in run\r\n convert_tf_checkpoint_to_pytorch(self._tf_checkpoint, self._config, self._pytorch_dump_output)\r\n File \"/****/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py\", line 36, in convert_tf_checkpoint_to_pytorch\r\n load_tf_weights_in_bert(model, config, tf_checkpoint_path)\r\n File \"/****/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py\", line 155, in 
load_tf_weights_in_bert\r\n pointer.shape == array.shape\r\n File \"/****/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 778, in __getattr__\r\n raise ModuleAttributeError(\"'{}' object has no attribute '{}'\".format(\r\ntorch.nn.modules.module.ModuleAttributeError: 'BertEmbeddings' object has no attribute 'shape'\r\n```\r\n\r\n",
"Hmmm it seems your model contains an additional weight? `bert/embeddings/relation_embedding` is not in the PyTorch model.",
"I'd like to say thank you for considering this matter together.\r\n\r\nReferring to your comment, I've just now checked the open-sourced code of the model I'd like to convert to PyTorch.\r\nHowever, I cannot find `relation_embedding` there.\r\n\r\nMaybe the version of the fine-tuned model provided by the author is different from the published implementation.\r\n\r\nAs a test, I tried to convert the model which I had fine-tuned by myself using the author's published implementation.\r\nIn this case, the `relation_embedding` error did not occur, but the `Skipping global_step` caused an error shown below:\r\n\r\n``` sh\r\nModuleAttributeError: 'BertForPreTraining' object has no attribute 'bias'.\r\n```\r\n\r\nThis `global_step` is included in the author's published implementation, and I think it is defined by the author.\r\n\r\nI think I was able to load the model provided by the author with the code published by the author, but it may be my misunderstanding.\r\nI would like to verify that point as well.\r\n",
"Hmmm I understand.\r\n\r\nI don't think it's the `global_step`, as this gets skipped here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/b020a736c374460af1b34267283f957988350630/src/transformers/models/bert/modeling_bert.py#L120-L125\r\n\r\nAs a way to debug what's happening here, could you add the following log statement:\r\n```py\r\nlogger.info(f\"Trying to assign {name}\")\r\n```\r\n\r\nright after the following line:\r\nhttps://github.com/huggingface/transformers/blob/b020a736c374460af1b34267283f957988350630/src/transformers/models/bert/modeling_bert.py#L116\r\n\r\nIt would then look like:\r\n\r\n```py\r\n for name, array in zip(names, arrays):\r\n logger.info(f\"Trying to assign {name}\")\r\n name = name.split(\"/\")\r\n # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v\r\n # which are not required for using pretrained model\r\n if any(\r\n n in [\"adam_v\", \"adam_m\", \"AdamWeightDecayOptimizer\", \"AdamWeightDecayOptimizer_1\", \"global_step\"]\r\n for n in name\r\n ):\r\n```\r\n\r\nwe can then try to identify what's happening with the checkpoint.",
"Thanks! I was also just checking modeling_bert.py#L116-L125 now.\r\n\r\nIt seems the author's code skips a variable if its name is not in the model for loading.\r\n\r\nI've just read the `load_tf_weights_in_bert` used in `transformers-cli convert --model_type bert`, and understood how items such as `adam_v` and `adam_m` (which are not required for use pretrained model) are skipped.\r\n\r\nhttps://github.com/huggingface/transformers/blob/b020a736c374460af1b34267283f957988350630/src/transformers/models/bert/modeling_bert.py#L116-L125\r\n\r\nAt this time, I think I can skip the `relation_embedding` for my usage.\r\nHence, I'll try to modify the `convert` code to skip for this time. \r\n\r\nAlso, I'll try the snippet you kindly wrote!",
"I inserted `logger.info(f\"Trying to assign {name}\")` and got the following outputs.\r\n\r\nWhen try to convert the author provided fine-tuned model, the output is as below:\r\n\r\n```\r\nTrying to assign bert/embeddings/LayerNorm/beta\r\nInitialize PyTorch weight ['bert', 'embeddings', 'LayerNorm', 'beta']\r\nTrying to assign bert/embeddings/LayerNorm/beta/adam_m\r\nSkipping bert/embeddings/LayerNorm/beta/adam_m\r\nTrying to assign bert/embeddings/LayerNorm/beta/adam_v\r\nSkipping bert/embeddings/LayerNorm/beta/adam_v\r\nTrying to assign bert/embeddings/LayerNorm/gamma\r\nInitialize PyTorch weight ['bert', 'embeddings', 'LayerNorm', 'gamma']\r\nTrying to assign bert/embeddings/LayerNorm/gamma/adam_m\r\nSkipping bert/embeddings/LayerNorm/gamma/adam_m\r\nTrying to assign bert/embeddings/LayerNorm/gamma/adam_v\r\nSkipping bert/embeddings/LayerNorm/gamma/adam_v\r\nTrying to assign bert/embeddings/position_embeddings\r\nInitialize PyTorch weight ['bert', 'embeddings', 'position_embeddings']\r\nTrying to assign bert/embeddings/position_embeddings/adam_m\r\nSkipping bert/embeddings/position_embeddings/adam_m\r\nTrying to assign bert/embeddings/position_embeddings/adam_v\r\nSkipping bert/embeddings/position_embeddings/adam_v\r\nTrying to assign bert/embeddings/relation_embedding\r\nSkipping bert/embeddings/relation_embedding\r\nTraceback (most recent call last):\r\n```\r\n\r\nWhen try to convert my own fine-tuned model with the author's code, the output is as below:\r\n\r\n```\r\n...\r\nTrying to assign bert/encoder/layer_9/output/dense/bias/adam_v\r\nSkipping bert/encoder/layer_9/output/dense/bias/adam_v\r\nTrying to assign bert/encoder/layer_9/output/dense/kernel\r\nInitialize PyTorch weight ['bert', 'encoder', 'layer_9', 'output', 'dense', 'kernel']\r\nTrying to assign bert/encoder/layer_9/output/dense/kernel/adam_m\r\nSkipping bert/encoder/layer_9/output/dense/kernel/adam_m\r\nTrying to assign bert/encoder/layer_9/output/dense/kernel/adam_v\r\nSkipping bert/encoder/layer_9/output/dense/kernel/adam_v\r\nTrying to assign bert/pooler/dense/bias\r\nInitialize PyTorch weight ['bert', 'pooler', 'dense', 'bias']\r\nTrying to assign bert/pooler/dense/bias/adam_m\r\nSkipping bert/pooler/dense/bias/adam_m\r\nTrying to assign bert/pooler/dense/bias/adam_v\r\nSkipping bert/pooler/dense/bias/adam_v\r\nTrying to assign bert/pooler/dense/kernel\r\nInitialize PyTorch weight ['bert', 'pooler', 'dense', 'kernel']\r\nTrying to assign bert/pooler/dense/kernel/adam_m\r\nSkipping bert/pooler/dense/kernel/adam_m\r\nTrying to assign bert/pooler/dense/kernel/adam_v\r\nSkipping bert/pooler/dense/kernel/adam_v\r\nTrying to assign global_step\r\nSkipping global_step\r\nTrying to assign output_bias\r\nTraceback (most recent call last):\r\n```\r\n\r\nAs you have pointed out, what caused the error is not `global_step`, but `output_bias`.",
"It seems that `output_bias` is not the part of BERT, but of the linear layer, as the related paper said that the authors fine-tune all the parameters including the BERT and the two additional linear layers.\r\n\r\n```\r\nLoading TF weight output_bias with shape [2]\r\nLoading TF weight output_bias/adam_m with shape [2]\r\nLoading TF weight output_bias/adam_v with shape [2]\r\nLoading TF weight output_weights with shape [2, 768]\r\nLoading TF weight output_weights/adam_m with shape [2, 768]\r\nLoading TF weight output_weights/adam_v with shape [2, 768]\r\n```",
"Hmmm indeed it seems that the model doesn't fit one-to-one to our architecture. You might need to slightly tweak the architecture and conversion script to load it, but you're probably the most expert on the matter. If you want me to take a deeper look, feel free to send me the weights/config so I can take a look locally.",
"Thank you for your kind and encouraging comment!\r\nThanks to your advice, it seems that what is a problem I should solve becomes clear.\r\nI'll do my best to solve it!",
"Hi, \r\n\r\nSorry it's been a few days because I had another issue, but I am working on this issue again.\r\n\r\nI would like to ask one question about the relationship between `m_name` and `name`.\r\nI'm assuming that `name` is split into the parts of the name hierarchy and that `m_name` handles each part.\r\n`m_name` is referred after the `for` statement, is it safe to consider it as the same as `name[-1]`?\r\n\r\nIt seems that it is judged whether `m_name` is `_embedding` or `kernel`, is that correct?\r\nIs there any reason why `m_name` is used instead of `name[-1]` (after the end of the `for` statement)?\r\n\r\nhttps://github.com/huggingface/transformers/blob/9152f16023b59d262b51573714b40325c8e49370/src/transformers/models/bert/modeling_bert.py#L116-L152\r\n\r\nI added two log statements to check if there are differences in `m_name` (after `for` statement) and `name[-1]`, but cannot found them.\r\n\r\n`added`\r\n\r\n```python\r\n logger.info(f\"name: {name}\")\r\n logger.info(f\"m_name: {m_name}\")\r\n if m_name[-11:] == \"_embeddings\":\r\n pointer = getattr(pointer, \"weight\")\r\n elif m_name == \"kernel\":\r\n array = np.transpose(array)\r\n```\r\n\r\n`output`\r\n\r\n```\r\n2021-01-24 08:49:34,227 | INFO : Skipping bert/embeddings/LayerNorm/gamma/adam_m\r\n2021-01-24 08:49:34,227 | INFO : Skipping bert/embeddings/LayerNorm/gamma/adam_v\r\n2021-01-24 08:49:34,228 | INFO : name: ['bert', 'embeddings', 'position_embeddings']\r\n2021-01-24 08:49:34,229 | INFO : m_name: position_embeddings\r\n2021-01-24 08:49:34,230 | INFO : Initialize PyTorch weight ['bert', 'embeddings', 'position_embeddings']\r\n2021-01-24 08:49:34,231 | INFO : Skipping bert/embeddings/position_embeddings/adam_m\r\n2021-01-24 08:49:34,232 | INFO : Skipping bert/embeddings/position_embeddings/adam_v\r\n2021-01-24 08:49:34,233 | INFO : Skipping bert/embeddings/relation_embedding\r\n2021-01-24 08:49:34,233 | INFO : name: ['bert', 'embeddings', 'relation_embedding']\r\n2021-01-24 08:49:34,234 | INFO : m_name: relation_embedding\r\n```\r\n\r\n",
"Thanks to your advice, I think I've almost achieved the conversion I'm aiming for.\r\n\r\nI added a `force` option to force skip the unrelated items and save them separately as `npy.`\r\nI split the loop part form `load_tf_weights_in_bert`, and defined new function `getpointer`.\r\n\r\nIf `force` is `True,` some items that cannot be found in BERT is skipped and saved separately. \r\n\r\nHere is my code snippet.\r\n\r\n```python\r\ndef getpointer(pointer, m_name, name):\r\n if re.fullmatch(r\"[A-Za-z]+_\\d+\", m_name):\r\n scope_names = re.split(r\"_(\\d+)\", m_name)\r\n else:\r\n scope_names = [m_name]\r\n if scope_names[0] == \"kernel\" or scope_names[0] == \"gamma\":\r\n pointer = getattr(pointer, \"weight\")\r\n elif scope_names[0] == \"output_bias\" or scope_names[0] == \"beta\":\r\n pointer = getattr(pointer, \"bias\")\r\n elif scope_names[0] == \"output_weights\":\r\n pointer = getattr(pointer, \"weight\")\r\n elif scope_names[0] == \"squad\":\r\n pointer = getattr(pointer, \"classifier\")\r\n else:\r\n try:\r\n pointer = getattr(pointer, scope_names[0])\r\n except AttributeError:\r\n logger.info(\"Skipping {}\".format(\"/\".join(name)))\r\n # continue\r\n return pointer\r\n if len(scope_names) >= 2:\r\n num = int(scope_names[1])\r\n pointer = pointer[num]\r\n\r\n return pointer\r\n\r\n\r\ndef load_tf_weights_in_bert(model, config, tf_checkpoint_path, force=True, skipped_save_path=\"./skipped\"):\r\n \"\"\"Load tf checkpoints in a pytorch model.\"\"\"\r\n\r\n if force:\r\n logger.warning(\"The 'force' option is set to be True. It will force conversion even if the model types do not match.\")\r\n os.makedirs(os.path.join(skipped_save_path, \"skipped\"), exist_ok=True)\r\n skipped_names = []\r\n skipped_arrays = []\r\n\r\n try:\r\n import tensorflow as tf\r\n except ImportError:\r\n logger.error(\r\n \"Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. 
Please see \"\r\n \"https://www.tensorflow.org/install/ for installation instructions.\"\r\n )\r\n raise\r\n tf_path = os.path.abspath(tf_checkpoint_path)\r\n logger.info(\"Converting TensorFlow checkpoint from {}\".format(tf_path))\r\n # Load weights from TF model\r\n init_vars = tf.train.list_variables(tf_path)\r\n names = []\r\n arrays = []\r\n for name, shape in init_vars:\r\n logger.info(\"Loading TF weight {} with shape {}\".format(name, shape))\r\n array = tf.train.load_variable(tf_path, name)\r\n names.append(name)\r\n arrays.append(array)\r\n\r\n for name, array in zip(names, arrays):\r\n # logger.info(f\"Trying to assign {name}\")\r\n name = name.split(\"/\")\r\n # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v\r\n # which are not required for using pretrained model\r\n if any(\r\n n in [\"adam_v\", \"adam_m\", \"AdamWeightDecayOptimizer\", \"AdamWeightDecayOptimizer_1\", \"global_step\"]\r\n for n in name\r\n ):\r\n logger.info(\"Skipping {}\".format(\"/\".join(name)))\r\n continue\r\n pointer = model\r\n for m_name in name:\r\n if force:\r\n # Skip unrelated items and save them separately.\r\n try:\r\n pointer = getpointer(pointer, m_name, name)\r\n except AttributeError:\r\n logger.info(\"Skipping {}\".format(\"/\".join(name)))\r\n skipped_names.append(name)\r\n skipped_arrays.append(array)\r\n else:\r\n pointer = getpointer(pointer, m_name, name)\r\n if m_name[-11:] == \"_embeddings\":\r\n pointer = getattr(pointer, \"weight\")\r\n elif m_name == \"kernel\":\r\n array = np.transpose(array)\r\n try:\r\n if force:\r\n try:\r\n pointer.shape\r\n except AttributeError:\r\n logger.info(\"Skipping {}\".format(\"/\".join(name)))\r\n skipped_names.append(name)\r\n skipped_arrays.append(array)\r\n else:\r\n assert (\r\n pointer.shape == array.shape\r\n ), f\"Pointer shape {pointer.shape} and array shape {array.shape} mismatched\"\r\n except AssertionError as e:\r\n e.args += (pointer.shape, array.shape)\r\n raise\r\n logger.info(\"Initialize PyTorch weight {}\".format(name))\r\n pointer.data = torch.from_numpy(array)\r\n\r\n if force:\r\n logger.info(\"Save force skipped files\")\r\n for name, array in zip(skipped_names, skipped_arrays):\r\n skipped_to_save = os.path.join(skipped_save_path, \"skipped\", \"-\".join(name) + \".npy\")\r\n logger.info(\"Save force skipped {} to {}\".format(\"/\".join(name), skipped_to_save))\r\n np.save(skipped_to_save, array)\r\n\r\n return model\r\n```\r\n\r\nThe conversion is done!\r\n\r\nIn my case, the skipped items are saved as follows.\r\n\r\n``` sh\r\n$ ls converted/skipped/\r\nbert-embeddings-relation_embedding.npy bert-final_output-token_score-kernel.npy bert-relation-kernel.npy output_weights.npy\r\nbert-final_output-token_score-bias.npy bert-relation-bias.npy output_bias.npy\r\n```\r\n\r\nI should manage these \"unrelated\" `np.array` files by making appropriate layers and write the arrays as weight and bias of the layer. \r\nMoreover, I have trouble loading the generated `pytorch_model.bin`. It is said that there is no `config.json` file. I'd like to check how to generate the correct one (just copying TF `bert_config.json` doesn't work) with converted `pytorch_model.bin`.\r\n\r\nThank you very much for your help!",
"Excuse me for my frequent posting.\r\n\r\nTo get the appropriate `config.json`, I've changed the last part of the conversion function, where the model is saved, to as follows (changed from `torch.save()` to `model.save_pretrained()`):\r\n\r\n```python\r\n # Save pytorch-model\r\n print(\"Save the PyTorch model and the config file to {}\".format(pytorch_dump_dir))\r\n # torch.save(model.state_dict(), pytorch_dump_path)\r\n model.save_pretrained(pytorch_dump_dir)\r\n```\r\n\r\n(The original code is in \r\nhttps://github.com/huggingface/transformers/blob/9152f16023b59d262b51573714b40325c8e49370/src/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py#L38-L40)\r\n\r\nThen, the output of my modified script is:\r\n\r\n```python\r\n...\r\n2021-01-25 04:27:00,442 | INFO : Initialize PyTorch weight ['output_bias']\r\n2021-01-25 04:27:00,442 | INFO : Skipping output_bias/adam_m\r\n2021-01-25 04:27:00,442 | INFO : Skipping output_bias/adam_v\r\n2021-01-25 04:27:00,443 | INFO : Skipping output_weights\r\n2021-01-25 04:27:00,443 | INFO : Skipping output_weights\r\n2021-01-25 04:27:00,444 | INFO : Initialize PyTorch weight ['output_weights']\r\n2021-01-25 04:27:00,444 | INFO : Skipping output_weights/adam_m\r\n2021-01-25 04:27:00,444 | INFO : Skipping output_weights/adam_v\r\n2021-01-25 04:27:00,445 | INFO : Save force skipped files\r\n2021-01-25 04:27:00,445 | INFO : Save force skipped bert/embeddings/relation_embedding to ./converted/skipped/bert-embeddings-relation_embedding.npy\r\n2021-01-25 04:27:00,456 | INFO : Save force skipped bert/final_output/token_score/bias to ./converted/skipped/bert-final_output-token_score-bias.npy\r\n2021-01-25 04:27:00,458 | INFO : Save force skipped bert/final_output/token_score/kernel to ./converted/skipped/bert-final_output-token_score-kernel.npy\r\n2021-01-25 04:27:00,461 | INFO : Save force skipped bert/final_output/token_score/kernel to ./converted/skipped/bert-final_output-token_score-kernel.npy\r\n2021-01-25 04:27:00,463 | INFO : Save force skipped bert/relation/bias to ./converted/skipped/bert-relation-bias.npy\r\n2021-01-25 04:27:00,466 | INFO : Save force skipped bert/relation/kernel to ./converted/skipped/bert-relation-kernel.npy\r\n2021-01-25 04:27:00,492 | INFO : Save force skipped bert/relation/kernel to ./converted/skipped/bert-relation-kernel.npy\r\n2021-01-25 04:27:00,519 | INFO : Save force skipped output_bias to ./converted/skipped/output_bias.npy\r\n2021-01-25 04:27:00,521 | INFO : Save force skipped output_bias to ./converted/skipped/output_bias.npy\r\n2021-01-25 04:27:00,523 | INFO : Save force skipped output_weights to ./converted/skipped/output_weights.npy\r\n2021-01-25 04:27:00,525 | INFO : Save force skipped output_weights to ./converted/skipped/output_weights.npy\r\nSave the PyTorch model and the config file to ./converted/\r\nConfiguration saved in ./converted/config.json\r\nModel weights saved in ./converted/pytorch_model.bin\r\n```\r\n\r\nNow I can load the converted model!\r\n\r\nAbout the skipped items (saved as `npy`), I think I can convert them to `nn.Module` by referring to:\r\n\r\nhttps://github.com/huggingface/transformers/blob/9152f16023b59d262b51573714b40325c8e49370/src/transformers/models/bert/modeling_bert.py#L161\r\n\r\nThank you again!",
"Fantastic! Great job, thank you for sharing your progress!",
"I greatly appreciate your help on this issue.\r\nIt's my pleasure if someone who will come across a similar problem can look at this issue and solve the problem.\r\n\r\nI think my `force` convert script is not adequately simple and is a bit hard to apply to the all models, \r\nbut changing from `torch.save()` to `model.save_pretrained()` may help some users.\r\nIf you don’t mind, could you please tell me what do you think about this change?\r\n\r\n```python\r\n # Save pytorch-model\r\n print(\"Save the PyTorch model and the config file to {}\".format(pytorch_dump_dir))\r\n # torch.save(model.state_dict(), pytorch_dump_path)\r\n model.save_pretrained(pytorch_dump_dir)\r\n```\r\n\r\nIt seems for me that creating `config.json` with the converted model `pytorch_model.bin` would be useful, or, for models where the convert command works correctly, is the `config.json` generated elsewhere?\r\n\r\nIf this point can be changed, the main code changes I assume are as follows:\r\n- The save statement shown above.\r\n- The option `--pytorch_dump_output` of convert command will be changed to have `/path/to/directory/` instead of `/path/to/directory/pytorch_model.bin`. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,610 | 1,619 | 1,619 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.1.1
- Platform: Linux-4.15.0-129-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* Convert TF v1 ckpt to PyTorch
## To reproduce
I tried to convert a TensorFlow checkpoint, but a `ModuleAttributeError` occurred.
What I ran:
```
****@**** $ transformers-cli convert --model_type bert \
> --tf_checkpoint $MODEL_DIR/model.ckpt \
> --config ****/bert_config.json \
> --pytorch_dump_output $MODEL_DIR/pytorch_model.bin
```
(In this case, `bert_config.json` is in a separate folder, but it corresponds to the `ckpt`.)
Output is:
```
Traceback (most recent call last):
File "/****/.pyenv/versions/anaconda3-2020.07/bin/transformers-cli", line 8, in <module>
sys.exit(main())
File "/****/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/commands/transformers_cli.py", line 51, in main
service.run()
File "/****/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/commands/convert.py", line 105, in run
convert_tf_checkpoint_to_pytorch(self._tf_checkpoint, self._config, self._pytorch_dump_output)
File "/****/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_bert(model, config, tf_checkpoint_path)
File "/****/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 155, in load_tf_weights_in_bert
pointer.shape == array.shape
File "/****/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/torch/nn/modules/module.py", line 778, in __getattr__
raise ModuleAttributeError("'{}' object has no attribute '{}'".format(
torch.nn.modules.module.ModuleAttributeError: 'BertEmbeddings' object has no attribute 'shape'
```
## Expected behavior
I think it is not strange that `BertEmbeddings` (`nn.Module`) doesn't have `shape`.
Is it possible to get such an error depending on the original TensorFlow checkpoint?
In such a case, are there any tips to deal with it?
I really appreciate any help you can provide.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9657/timeline | completed | null | null |
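For reference, the workflow that the thread above converges on (inspect the checkpoint, skip the task-specific variables, and save with `save_pretrained()`) can be sketched as follows. This is a minimal sketch rather than the exact script from the thread: all paths are placeholders, and the `load_tf_weights_in_bert` call assumes the patched, skip-unknown-variables version described in the comments above.

```python
# Minimal sketch, assuming a TF v1 BERT checkpoint that carries extra task-specific
# variables (e.g. bert/embeddings/relation_embedding, output_bias, output_weights).
# Paths are placeholders.
import tensorflow as tf

from transformers import BertConfig, BertForPreTraining
from transformers.models.bert.modeling_bert import load_tf_weights_in_bert

TF_CKPT = "model.ckpt"            # placeholder
CONFIG_FILE = "bert_config.json"  # placeholder

# 1. List the checkpoint variables to spot the ones that have no counterpart in
#    BertForPreTraining before running the conversion.
for name, shape in tf.train.list_variables(TF_CKPT):
    print(name, shape)

# 2. Build the PyTorch model and load the TF weights. The stock
#    load_tf_weights_in_bert raises on the extra variables; the thread above
#    patches it to skip them and dump them separately as .npy files.
config = BertConfig.from_json_file(CONFIG_FILE)
model = BertForPreTraining(config)
load_tf_weights_in_bert(model, config, TF_CKPT)

# 3. save_pretrained() writes pytorch_model.bin *and* config.json, unlike
#    torch.save(model.state_dict(), ...), which writes only the weights.
model.save_pretrained("./converted")
```

Loading the result back is then a plain `BertForPreTraining.from_pretrained("./converted")` call; the skipped task-specific arrays still have to be wired into custom heads by hand, as noted above.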
https://api.github.com/repos/huggingface/transformers/issues/9656 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9656/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9656/comments | https://api.github.com/repos/huggingface/transformers/issues/9656/events | https://github.com/huggingface/transformers/issues/9656 | 788,223,456 | MDU6SXNzdWU3ODgyMjM0NTY= | 9,656 | "Converting Tensorflow Checkpoints" document has wrong link in v4.2.0+ | {
"login": "forest1988",
"id": 2755894,
"node_id": "MDQ6VXNlcjI3NTU4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forest1988",
"html_url": "https://github.com/forest1988",
"followers_url": "https://api.github.com/users/forest1988/followers",
"following_url": "https://api.github.com/users/forest1988/following{/other_user}",
"gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forest1988/subscriptions",
"organizations_url": "https://api.github.com/users/forest1988/orgs",
"repos_url": "https://api.github.com/users/forest1988/repos",
"events_url": "https://api.github.com/users/forest1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/forest1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Indeed this link needs to be updated. Do you want to open a PR to fix it?",
"@LysandreJik \r\nThank you for your comment! I'd love to open a PR to fix it.\r\nI would like to open a PR by the end of this week.\r\n\r\nShould I devise a way to change the link destination from the document before and after the change in the folder structure (version 3 to 4)?",
"Excuse me my opening a PR is delayed even though I said: \"by the end of this week\".\r\n\r\nI haven't been able to find the time to do this, but your advice on another issue has helped me understand `convert` better, so I'm going to work on it.\r\n\r\nI'll try to update the documentation on how to explain the differences between version 3 and 4, and I'd be happy to receive your comments in the PR (of course, any advice in advance would be greatly appreciated).\r\n",
"I apologize for the delay in getting the work done later than I said it would be.\r\nI opened PR #9791.\r\nIf you have time, I would appreciate it if you could take a look.",
"Merged, thanks! No worries for the delay!",
"@LysandreJik \r\nThank you for merging and giving me your kind words!"
] | 1,610 | 1,611 | 1,611 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: v4.2.0+
### Who can help
documentation: @sgugger
## Information
I'd like to convert a BERT ckpt to PyTorch, and I read the [Converting Tensorflow Checkpoints](https://huggingface.co/transformers/converting_tensorflow_models.html) document.
It seems that the link to `convert_bert_original_tf_checkpoint_to_pytorch.py` is outdated.
It is linked to https://huggingface.co/transformers/converting_tensorflow_models.html, but `convert_bert_original_tf_checkpoint_to_pytorch.py` is now placed in https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py (I found the information in https://github.com/huggingface/transformers/issues/9556).
It seems that in https://github.com/huggingface/transformers/pull/9217 the document was updated to use a prefix to get the `release` variable.
However, perhaps the document has not yet been updated to match the folder structure in v4?
Sorry if I have misunderstood something. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9656/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9655 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9655/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9655/comments | https://api.github.com/repos/huggingface/transformers/issues/9655/events | https://github.com/huggingface/transformers/issues/9655 | 788,215,851 | MDU6SXNzdWU3ODgyMTU4NTE= | 9,655 | BertTokenizer and encode_plus() | {
"login": "SimplyLucKey",
"id": 35954092,
"node_id": "MDQ6VXNlcjM1OTU0MDky",
"avatar_url": "https://avatars.githubusercontent.com/u/35954092?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SimplyLucKey",
"html_url": "https://github.com/SimplyLucKey",
"followers_url": "https://api.github.com/users/SimplyLucKey/followers",
"following_url": "https://api.github.com/users/SimplyLucKey/following{/other_user}",
"gists_url": "https://api.github.com/users/SimplyLucKey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SimplyLucKey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SimplyLucKey/subscriptions",
"organizations_url": "https://api.github.com/users/SimplyLucKey/orgs",
"repos_url": "https://api.github.com/users/SimplyLucKey/repos",
"events_url": "https://api.github.com/users/SimplyLucKey/events{/privacy}",
"received_events_url": "https://api.github.com/users/SimplyLucKey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No it’s still there and still identical. It’s just that you made a typo and typed `encoder_plus` instead of `encode_plus` for what I can tell.\r\n\r\nThough we recommand using just the `__call__` method now which is a shortcut wrapping all the encode method in a single API. You can read more details on the additional features that have been added in v3 and v4 in the doc if you want to simplify your preprocessing.",
"Here: https://huggingface.co/transformers/preprocessing.html",
 No it">
"> No, it’s still there and still identical. It’s just that you made a typo and typed `encoder_plus` instead of `encode_plus`, from what I can tell.\r\n> \r\n> We now recommend using just the `__call__` method, which is a shortcut wrapping all the encode methods in a single API. You can read more details on the additional features that have been added in v3 and v4 in the doc if you want to simplify your preprocessing.\r\n\r\nOops, sorry, I completely missed that. Thank you!",
"long_text = \"This is a very very long text. \" * 300\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-large-uncased\")\r\n# tokenize without truncation\r\ninputs_no_trunc = tokenizer.encode_plus(long_text, add_special_tokens=False, return_tensors='pt')\r\n\r\nI get the following error:\r\n\r\nAttributeError: 'BertTokenizer' object has no attribute 'encode_plus'\r\n\r\nIs their a substitution for this?"
] | 1,610 | 1,685 | 1,611 | NONE | null | I see that from version 2.4.0 I was able to use `encode_plus()` with `BertTokenizer`
However, it seems that this is no longer the case.
`AttributeError: 'BertTokenizer' object has no attribute 'encoder_plus'`
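For reference, a minimal sketch of the call I mean (assuming a standard `bert-base-uncased` tokenizer):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# this is the call that used to work for me in 2.4.0
encoded = tokenizer.encode_plus("Hello world", return_tensors="pt")
print(encoded["input_ids"])
```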
Is there a replacement for `encode_plus`? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9655/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9654 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9654/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9654/comments | https://api.github.com/repos/huggingface/transformers/issues/9654/events | https://github.com/huggingface/transformers/pull/9654 | 788,206,634 | MDExOlB1bGxSZXF1ZXN0NTU2NzU2NzM1 | 9,654 | Add t5 convert to transformers-cli | {
"login": "acul3",
"id": 56231298,
"node_id": "MDQ6VXNlcjU2MjMxMjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/56231298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/acul3",
"html_url": "https://github.com/acul3",
"followers_url": "https://api.github.com/users/acul3/followers",
"following_url": "https://api.github.com/users/acul3/following{/other_user}",
"gists_url": "https://api.github.com/users/acul3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/acul3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/acul3/subscriptions",
"organizations_url": "https://api.github.com/users/acul3/orgs",
"repos_url": "https://api.github.com/users/acul3/repos",
"events_url": "https://api.github.com/users/acul3/events{/privacy}",
"received_events_url": "https://api.github.com/users/acul3/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can we trigger the CI again to make sure all tests are passing? Think you can add an empty git commit with \r\n\r\n```\r\ngit commit --allow-empty -m \"Trigger notification\"\r\n```",
"@patrickvonplaten done it!..it seems failing 1 check only",
"@patrickvonplaten all seems good now...kindly check"
] | 1,610 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
Add T5 model conversion to transformers-cli.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @sgugger @LysandreJik
and
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9654/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9654",
"html_url": "https://github.com/huggingface/transformers/pull/9654",
"diff_url": "https://github.com/huggingface/transformers/pull/9654.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9654.patch",
"merged_at": 1611153268000
} |
https://api.github.com/repos/huggingface/transformers/issues/9653 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9653/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9653/comments | https://api.github.com/repos/huggingface/transformers/issues/9653/events | https://github.com/huggingface/transformers/issues/9653 | 788,159,869 | MDU6SXNzdWU3ODgxNTk4Njk= | 9,653 | AutoModelForMaskedLM not working when using MBartForConditionalGeneration architecture. | {
"login": "moussaKam",
"id": 28675016,
"node_id": "MDQ6VXNlcjI4Njc1MDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/28675016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moussaKam",
"html_url": "https://github.com/moussaKam",
"followers_url": "https://api.github.com/users/moussaKam/followers",
"following_url": "https://api.github.com/users/moussaKam/following{/other_user}",
"gists_url": "https://api.github.com/users/moussaKam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moussaKam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moussaKam/subscriptions",
"organizations_url": "https://api.github.com/users/moussaKam/orgs",
"repos_url": "https://api.github.com/users/moussaKam/repos",
"events_url": "https://api.github.com/users/moussaKam/events{/privacy}",
"received_events_url": "https://api.github.com/users/moussaKam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @moussaKam, \r\n\r\nThanks for your issue! My opinion here is the following:\r\n\r\n- `MBartForConditionalGeneration` should not work with `AutoModelForMaskedLM`, but only with `AutoModelForSeq2SeqLM` -> it's not a Bert-like autoencoding model, but an encoder-decoder model.\r\n- You are completely right in that `MBartForConditionalGeneration` should have `MBartForConditionalGeneration` in its config and not `Bart...`. This should however not make a difference when loading the model with `AutoModelForSeq2SeqLM.from_pretrained(...)` -> I'll change that!\r\n\r\n@LysandreJik what do you think?",
"Hi @moussaKam, I agree with @patrickvonplaten that `MBartForConditionalGeneration` should not work with `AutoModelForMaskedLM` but only with `AutoModelForSeq2SeqLM`.\r\n\r\nI can confirm that \r\n```py\r\n>>> from transformers import AutoModelForSeq2SeqLM\r\n>>> model = AutoModelForSeq2SeqLM.from_pretrained(\"moussaKam/barthez\")\r\n```\r\nworks correctly.\r\n\r\nAlso agree with you that the configuration looks better thanks to [huggingface.co#88467f](https://huggingface.co/facebook/mbart-large-cc25/commit/88467fef84ba338740dc562dec3a105c2b14de9f)!",
"Yes @LysandreJik @patrickvonplaten you are completely right, however the [inference API](https://huggingface.co/moussaKam/barthez?text=Paris+est+la+%3Cmask%3E+de+la+France.) is using `AutoModelForMaskedLM` for some reason, and returning the following error:\r\n```\r\n⚠️ Unrecognized configuration class for this kind of AutoModel: AutoModelForMaskedLM. Model type should be one of LayoutLMConfig, DistilBertConfig, AlbertConfig, BartConfig, CamembertConfig, XLMRobertaConfig, LongformerConfig, RobertaConfig, SqueezeBertConfig, BertConfig, MobileBertConfig, FlaubertConfig, XLMConfig, ElectraConfig, ReformerConfig, FunnelConfig, MPNetConfig, TapasConfig.\r\n```\r\n\r\nIs there anyway we can fix this issue?",
"That is problematic, indeed. Let me check what's going on.",
"Yes, same problem when using `pipeline`.\r\n\r\n```python\r\nfrom transformers import pipeline\r\n\r\npbase = pipeline(task=\"fill-mask\", model=\"moussaKam/barthez\")\r\nsrc_text = [\"Paris est la capitale de la <mask>\"]\r\nresults = [x[\"token_str\"] for x in pbase(src_text)]\r\n```\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-12-399795ed06d8> in <module>\r\n 1 from transformers import pipeline\r\n 2 \r\n----> 3 pbase = pipeline(task=\"fill-mask\", model=\"moussaKam/barthez\")\r\n 4 src_text = [\"Paris est la capitale de la <mask>\"]\r\n 5 results = [x[\"token_str\"] for x in pbase(src_text)]\r\n\r\n~/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, framework, revision, use_fast, **kwargs)\r\n 403 )\r\n 404 \r\n--> 405 model = model_class.from_pretrained(model, config=config, revision=revision, **model_kwargs)\r\n 406 if task == \"translation\" and model.config.task_specific_params:\r\n 407 for key in model.config.task_specific_params:\r\n\r\n~/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/models/auto/modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 1123 pretrained_model_name_or_path, *model_args, config=config, **kwargs\r\n 1124 )\r\n-> 1125 raise ValueError(\r\n 1126 \"Unrecognized configuration class {} for this kind of AutoModel: {}.\\n\"\r\n 1127 \"Model type should be one of {}.\".format(\r\n\r\nValueError: Unrecognized configuration class <class 'transformers.models.mbart.configuration_mbart.MBartConfig'> for this kind of AutoModel: AutoModelForMaskedLM.\r\nModel type should be one of LayoutLMConfig, DistilBertConfig, AlbertConfig, BartConfig, CamembertConfig, XLMRobertaConfig, LongformerConfig, RobertaConfig, SqueezeBertConfig, BertConfig, MobileBertConfig, FlaubertConfig, XLMConfig, ElectraConfig, ReformerConfig, FunnelConfig, MPNetConfig, TapasConfig.\r\n```",
"I see - thanks for clarifying @moussaKam . The PR attached above should solve the problem :-) "
] | 1,610 | 1,611 | 1,611 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.2.1
- Platform: Linux-4.4.0-197-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten @LysandreJik
## Information
Model I am using (Bert, XLNet ...): BARThez, MBART
## To reproduce
```python
from transformers import (
BarthezTokenizer,
AutoModelForMaskedLM,
MBartForConditionalGeneration
)
barthez_tokenizer = BarthezTokenizer.from_pretrained("moussaKam/barthez")
barthez_model = AutoModelForMaskedLM.from_pretrained("moussaKam/barthez")
```
error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-11-cef569c55032> in <module>
9
10 barthez_tokenizer = BarthezTokenizer.from_pretrained("moussaKam/barthez")
---> 11 barthez_model = AutoModelForMaskedLM.from_pretrained("moussaKam/barthez")
12
13 input_ids = torch.tensor(
~/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/models/auto/modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1123 pretrained_model_name_or_path, *model_args, config=config, **kwargs
1124 )
-> 1125 raise ValueError(
1126 "Unrecognized configuration class {} for this kind of AutoModel: {}.\n"
1127 "Model type should be one of {}.".format(
ValueError: Unrecognized configuration class <class 'transformers.models.mbart.configuration_mbart.MBartConfig'> for this kind of AutoModel: AutoModelForMaskedLM.
Model type should be one of LayoutLMConfig, DistilBertConfig, AlbertConfig, BartConfig, CamembertConfig, XLMRobertaConfig, LongformerConfig, RobertaConfig, SqueezeBertConfig, BertConfig, MobileBertConfig, FlaubertConfig, XLMConfig, ElectraConfig, ReformerConfig, FunnelConfig, MPNetConfig, TapasConfig.
```
The model works as expected when using `MBartForConditionalGeneration` instead of `AutoModelForMaskedLM`.
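For what it's worth, loading through the seq2seq auto class also works as a workaround (a minimal sketch, assuming the same checkpoint):

```python
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("moussaKam/barthez")
```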
After checking I see that the public model [MBart](https://huggingface.co/facebook/mbart-large-cc25/blob/main/config.json) itself is using `BartForConditionalGeneration` as default architecture, is that normal? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9653/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9652 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9652/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9652/comments | https://api.github.com/repos/huggingface/transformers/issues/9652/events | https://github.com/huggingface/transformers/pull/9652 | 788,137,431 | MDExOlB1bGxSZXF1ZXN0NTU2Njk4MzM2 | 9,652 | Update integrations.py | {
"login": "max-yue",
"id": 13486398,
"node_id": "MDQ6VXNlcjEzNDg2Mzk4",
"avatar_url": "https://avatars.githubusercontent.com/u/13486398?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/max-yue",
"html_url": "https://github.com/max-yue",
"followers_url": "https://api.github.com/users/max-yue/followers",
"following_url": "https://api.github.com/users/max-yue/following{/other_user}",
"gists_url": "https://api.github.com/users/max-yue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/max-yue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/max-yue/subscriptions",
"organizations_url": "https://api.github.com/users/max-yue/orgs",
"repos_url": "https://api.github.com/users/max-yue/repos",
"events_url": "https://api.github.com/users/max-yue/events{/privacy}",
"received_events_url": "https://api.github.com/users/max-yue/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,610 | 1,611 | 1,611 | CONTRIBUTOR | null | File "/share/apps/anaconda3/envs/my_env/lib/python3.7/site-packages/transformers/integrations.py", line 419, in __init__
self._SummaryWriter = SummaryWriter
UnboundLocalError: local variable 'SummaryWriter' referenced before assignment
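For context, this is the classic shape of the bug — a name that is only bound inside a conditional import path (an illustrative sketch only, not the actual `integrations.py` source):

```python
# illustrative sketch of the failure mode, not the actual integrations.py code
def get_summary_writer(has_tensorboard):
    if has_tensorboard:
        try:
            from torch.utils.tensorboard import SummaryWriter
        except ImportError:
            pass  # on this path SummaryWriter is never bound
    # because SummaryWriter is assigned (by the import) somewhere in this function,
    # Python treats it as a local name, so referencing it unbound raises UnboundLocalError
    return SummaryWriter

get_summary_writer(False)  # -> UnboundLocalError: local variable 'SummaryWriter' referenced before assignment
```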
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9652/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9652",
"html_url": "https://github.com/huggingface/transformers/pull/9652",
"diff_url": "https://github.com/huggingface/transformers/pull/9652.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9652.patch",
"merged_at": 1611074390000
} |
https://api.github.com/repos/huggingface/transformers/issues/9651 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9651/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9651/comments | https://api.github.com/repos/huggingface/transformers/issues/9651/events | https://github.com/huggingface/transformers/issues/9651 | 788,125,505 | MDU6SXNzdWU3ODgxMjU1MDU= | 9,651 | RAG Fine Tuning | {
"login": "Shree-Lakshmi-G-Prakash",
"id": 30719723,
"node_id": "MDQ6VXNlcjMwNzE5NzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/30719723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shree-Lakshmi-G-Prakash",
"html_url": "https://github.com/Shree-Lakshmi-G-Prakash",
"followers_url": "https://api.github.com/users/Shree-Lakshmi-G-Prakash/followers",
"following_url": "https://api.github.com/users/Shree-Lakshmi-G-Prakash/following{/other_user}",
"gists_url": "https://api.github.com/users/Shree-Lakshmi-G-Prakash/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shree-Lakshmi-G-Prakash/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shree-Lakshmi-G-Prakash/subscriptions",
"organizations_url": "https://api.github.com/users/Shree-Lakshmi-G-Prakash/orgs",
"repos_url": "https://api.github.com/users/Shree-Lakshmi-G-Prakash/repos",
"events_url": "https://api.github.com/users/Shree-Lakshmi-G-Prakash/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shree-Lakshmi-G-Prakash/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It is already [here](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag) .",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,610 | 1,619 | 1,619 | NONE | null | How do we train a RAG model with a custom dataset?
Can we have a detailed document on this?
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9651/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9650 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9650/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9650/comments | https://api.github.com/repos/huggingface/transformers/issues/9650/events | https://github.com/huggingface/transformers/issues/9650 | 787,909,993 | MDU6SXNzdWU3ODc5MDk5OTM= | 9,650 | Error w/Transformers 4.2.0 and TF Nightly | {
"login": "ANarayan",
"id": 5660075,
"node_id": "MDQ6VXNlcjU2NjAwNzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5660075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ANarayan",
"html_url": "https://github.com/ANarayan",
"followers_url": "https://api.github.com/users/ANarayan/followers",
"following_url": "https://api.github.com/users/ANarayan/following{/other_user}",
"gists_url": "https://api.github.com/users/ANarayan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ANarayan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ANarayan/subscriptions",
"organizations_url": "https://api.github.com/users/ANarayan/orgs",
"repos_url": "https://api.github.com/users/ANarayan/repos",
"events_url": "https://api.github.com/users/ANarayan/events{/privacy}",
"received_events_url": "https://api.github.com/users/ANarayan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello!\r\n\r\nPlease update to v4.2.1",
"This worked, thanks!"
] | 1,610 | 1,611 | 1,611 | NONE | null | @jplu I am running into issues when running transformers w/tf-nightly.
I get the error when I am trying to load the TFDistilBERT model:
`model = TFDistilBertModel.from_pretrained('distilbert-base-uncased')`
this is the error message:
```
ImportError:
TFDistilBertModel requires the TensorFlow library but it was not found in your environment. Checkout the instructions on the
installation page: https://www.tensorflow.org/install and follow the ones that match your environment.
```
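A quick way to see what transformers itself detects in that environment is something like this (a sketch):

```python
import tensorflow as tf
import transformers

print(tf.__version__)                  # the nightly build
print(transformers.__version__)
print(transformers.is_tf_available())  # if this prints False, that is what triggers the ImportError above
```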
I came across this bug when running a CI test for Ludwig. I think many projects use tf-nightly in their CI tests to make sure that the integrations are future-proof! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9650/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9649 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9649/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9649/comments | https://api.github.com/repos/huggingface/transformers/issues/9649/events | https://github.com/huggingface/transformers/issues/9649 | 787,856,954 | MDU6SXNzdWU3ODc4NTY5NTQ= | 9,649 | Does the latest huggingface-transformers version work with tokenizers==0.10.0? | {
"login": "unclebob7",
"id": 29818505,
"node_id": "MDQ6VXNlcjI5ODE4NTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/29818505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/unclebob7",
"html_url": "https://github.com/unclebob7",
"followers_url": "https://api.github.com/users/unclebob7/followers",
"following_url": "https://api.github.com/users/unclebob7/following{/other_user}",
"gists_url": "https://api.github.com/users/unclebob7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/unclebob7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unclebob7/subscriptions",
"organizations_url": "https://api.github.com/users/unclebob7/orgs",
"repos_url": "https://api.github.com/users/unclebob7/repos",
"events_url": "https://api.github.com/users/unclebob7/events{/privacy}",
"received_events_url": "https://api.github.com/users/unclebob7/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! We're [still pinned to 0.9.4](https://github.com/huggingface/transformers/blob/master/setup.py#L134). We'll pass to `0.10.0` soon.",
"@LysandreJik , if possible, could you provide your best guess as to when this requirement will be updated? I'm currently need to train a WordLevel Tokenizer, as well as use a transformer's model in the same process. I've broken the process up into two python files, which I run separately with different python environments, but it would be nice to have the full process in one file using one environment. Thanks!",
"@n1t0 tells me it should be ready sometimes next week!"
] | 1,610 | 1,612 | 1,612 | NONE | null | ## Environment info
- `transformers` version: 4.3.0.dev0
- `tokenizers` version: 0.10.0

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9649/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9648 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9648/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9648/comments | https://api.github.com/repos/huggingface/transformers/issues/9648/events | https://github.com/huggingface/transformers/issues/9648 | 787,822,136 | MDU6SXNzdWU3ODc4MjIxMzY= | 9,648 | Easier perplexity computation | {
"login": "uditarora",
"id": 6642146,
"node_id": "MDQ6VXNlcjY2NDIxNDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6642146?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uditarora",
"html_url": "https://github.com/uditarora",
"followers_url": "https://api.github.com/users/uditarora/followers",
"following_url": "https://api.github.com/users/uditarora/following{/other_user}",
"gists_url": "https://api.github.com/users/uditarora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uditarora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uditarora/subscriptions",
"organizations_url": "https://api.github.com/users/uditarora/orgs",
"repos_url": "https://api.github.com/users/uditarora/repos",
"events_url": "https://api.github.com/users/uditarora/events{/privacy}",
"received_events_url": "https://api.github.com/users/uditarora/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"Hi @uditarora! That would be a nice addition, do you want to open a PR to add a batched computation to the documentation?",
"Sure! I can try to create one.\r\n\r\nMight take me a couple of weeks before I can get started on it though, due to prior commitments.",
"Hi all. Is this issue still open? I like to contribute and collaborate.\r\n\r\nHere's my take.\r\n\r\n```python3\r\nimport torch\r\nimport torch.nn.functional as F\r\nfrom tqdm import tqdm\r\nfrom transformers import GPT2LMHeadModel, GPT2TokenizerFast\r\nfrom datasets import load_dataset\r\n\r\n\r\ndef batched_perplexity(model, dataset, tokenizer, batch_size, stride):\r\n device = model.device\r\n encodings = tokenizer(\"\\n\\n\".join(dataset[\"text\"]), return_tensors=\"pt\")\r\n text_len = encodings.input_ids.size(1)\r\n lls = []\r\n\r\n for i in tqdm(range(0, text_len, batch_size * stride)):\r\n begin_locs, end_locs, trg_lens = [], [], []\r\n for j in range(batch_size):\r\n j = i + j * stride\r\n if j >= text_len:\r\n break\r\n begin_loc = max(j + stride - max_len, 0)\r\n end_loc = min(j + stride, text_len)\r\n trg_len = end_loc - j # may be different from stride on last loop\r\n\r\n begin_locs.append(begin_loc)\r\n end_locs.append(end_loc)\r\n trg_lens.append(trg_len)\r\n\r\n input_ids = [encodings.input_ids[:, b:e] for b, e in zip(begin_locs, end_locs)]\r\n target_end_locs = [sen.size(-1) for sen in input_ids]\r\n input_ids = [\r\n F.pad(sen, (0, max_len - sen.size(-1)), \"constant\", 0) for sen in input_ids\r\n ] # we dont need attention mask as long as these padded token is not involved in loss calculation\r\n input_ids = torch.stack(input_ids, dim=1).squeeze(0).to(device)\r\n\r\n target_ids = torch.ones_like(input_ids) * -100 # -100 is the default ingore_index value in torch.nn.CrossEntropyLoss\r\n for i, (b, e) in enumerate(zip(trg_lens, target_end_locs)):\r\n labels = input_ids[i, -b:e].clone()\r\n target_ids[i, -b:e] = labels\r\n\r\n with torch.no_grad():\r\n outputs = model(input_ids, labels=target_ids)\r\n log_likelihood = outputs[\"loss\"] * sum(trg_lens)\r\n\r\n lls.append(log_likelihood)\r\n\r\n ppl = torch.exp(sum(torch.stack(lls) / end_locs[-1]))\r\n return ppl\r\n\r\n\r\nif __name__ == \"__main__\":\r\n device = \"cuda\"\r\n model_id = \"distilgpt2\"\r\n model = GPT2LMHeadModel.from_pretrained(model_id).to(device)\r\n tokenizer = GPT2TokenizerFast.from_pretrained(model_id)\r\n max_len = model.config.n_positions\r\n stride = 512\r\n batch_size = 16\r\n\r\n test = load_dataset(\"wikitext\", \"wikitext-2-raw-v1\", split=\"test[:100%]\")\r\n ppl = batched_perplexity(model, test, tokenizer, batch_size, stride)\r\n print(f\"--------------{ppl=}-------------\")\r\n```",
"Hello! is this issue still open? Did you test that the batched example above gives the same value for GPT2 as the documentation?",
"I hope this is correct? \r\n```\r\nmodel = GPT2LMHeadModel.from_pretrained(\"/content/drive/MyDrive/Colab Notebooks/best_model_wiki\")\r\ndef calculate_perplexity(model, dataloader, device):\r\n model.eval() # make sure the model is in evaluation mode\r\n total_loss = 0\r\n total_examples = 0\r\n\r\n with torch.no_grad(): # we don't need to calculate gradients\r\n for batch in dataloader:\r\n inputs, labels = batch['input_ids'].to(device), batch['labels'].to(device)\r\n outputs = model(input_ids=inputs, labels=labels)\r\n total_loss += outputs.loss.item() * inputs.size(0)\r\n total_examples += inputs.size(0)\r\n\r\n average_loss = total_loss / total_examples\r\n perplexity = torch.exp(torch.tensor(average_loss))\r\n return perplexity\r\nfrom torch.utils.data import DataLoader\r\n\r\n# Assuming you have a GPU available\r\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\r\nmodel.to(device)\r\n\r\n# Create the DataLoader for our test dataset\r\ntest_dataloader = DataLoader(test_dataset, batch_size=4, collate_fn=data_collator)\r\n\r\n# Calculate the perplexity\r\ntest_perplexity = calculate_perplexity(model, test_dataloader, device)\r\nprint(f\"Test Perplexity: {test_perplexity}\")\r\n```"
] | 1,610 | 1,699 | null | NONE | null | # 🚀 Feature request
The docs provide a method to evaluate perplexity for a GPT-2 model, one example at a time (https://huggingface.co/transformers/perplexity.html). However, this could potentially be included in the library, with the computation done in a batched manner.
## Motivation
This would make it easier and faster for people to evaluate their language models in terms of perplexity.
If not a solution integrated in the library, the example given in the docs can be updated to do computation in a batched manner for speed.
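As a very rough illustration, a batched version could look something like this (a sketch only — assuming a GPT-2 checkpoint, padding with the EOS token, and masking padded positions with `-100` so they are ignored by the loss):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

texts = ["a first example sentence", "a second, slightly longer example sentence"]
enc = tokenizer(texts, return_tensors="pt", padding=True)

labels = enc.input_ids.clone()
labels[enc.attention_mask == 0] = -100  # padded positions do not contribute to the loss

with torch.no_grad():
    out = model(input_ids=enc.input_ids, attention_mask=enc.attention_mask, labels=labels)

# out.loss is the mean negative log-likelihood per non-masked token over the whole batch
print(torch.exp(out.loss).item())
```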
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9648/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9648/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9647 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9647/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9647/comments | https://api.github.com/repos/huggingface/transformers/issues/9647/events | https://github.com/huggingface/transformers/issues/9647 | 787,810,215 | MDU6SXNzdWU3ODc4MTAyMTU= | 9,647 | Training Bert2Bert with EncoderDecoderModel and Seq2SeqTrainer results with Cuda OOM | {
"login": "segef",
"id": 27014781,
"node_id": "MDQ6VXNlcjI3MDE0Nzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/27014781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/segef",
"html_url": "https://github.com/segef",
"followers_url": "https://api.github.com/users/segef/followers",
"following_url": "https://api.github.com/users/segef/following{/other_user}",
"gists_url": "https://api.github.com/users/segef/gists{/gist_id}",
"starred_url": "https://api.github.com/users/segef/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/segef/subscriptions",
"organizations_url": "https://api.github.com/users/segef/orgs",
"repos_url": "https://api.github.com/users/segef/repos",
"events_url": "https://api.github.com/users/segef/events{/privacy}",
"received_events_url": "https://api.github.com/users/segef/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! What is your machine? When you run the script, at which point does it fail? Right off the bat, or after a few sequences have been processed?",
"I have tried it on my local GTX1650 and also on a 16gb T100. They both fail during processing the first sequence. It is not always at the same line but mostly during ``forward`` of ``SelfAttention`` module of the ``Bert``. I also decreased the input sizes while processing the data with the tokenizer. It manages to process one sequence but then it fails with OOM again while processing the second sequence. Additionally, I tried training it directly on [colab](https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing#scrollTo=7zdm50ZotZqb), it fails with a OOM there, too.",
"Not sure how and why but the training started working on T100, even though I haven't really changed anything. The GPU might be just overloaded back then. I will close this issue."
] | 1,610 | 1,611 | 1,611 | NONE | null | Hi,
I am trying to train a Bert2Bert model for text summarization. I followed the exact steps in [BERT2BERT for CNN/Dailymail](https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing#scrollTo=7zdm50ZotZqb). The only things I changed are the training arguments and the metrics. Additionally, I also tried replacing [seq2seq_trainer](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/seq2seq_trainer.py) with the Seq2SeqTrainer from the package itself, with the same result. I am using the ``bert-base-uncased`` model for BERT and CNN/Dailymail as the dataset (just as introduced in the [colab](https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing#scrollTo=7zdm50ZotZqb)).
from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer
def compute_metrics(pred):
labels_ids = pred.label_ids
pred_ids = pred.predictions
# all unnecessary tokens are removed
pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
labels_ids[labels_ids == -100] = tokenizer.pad_token_id
label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)
rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge1", "rouge2"])
rouge1 = rouge_output["rouge1"].mid
rouge2 = rouge_output["rouge2"].mid
return {
"rouge1_precision": round(rouge1.precision, 4),
"rouge1_recall": round(rouge1.recall, 4),
"rouge1_fmeasure": round(rouge1.fmeasure, 4),
"rouge2_precision": round(rouge2.precision, 4),
"rouge2_recall": round(rouge2.recall, 4),
"rouge2_fmeasure": round(rouge2.fmeasure, 4),
}
training_args = Seq2SeqTrainingArguments(
output_dir=output_folder,
logging_dir=log_folder,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
predict_with_generate=True,
evaluation_strategy=EvaluationStrategy.STEPS,
do_train=True,
do_eval=True,
logging_steps=1000, # set to 1000 for full training
load_best_model_at_end=True,
metric_for_best_model='rouge1_fmeasure',
eval_steps=8000, # set to 8000 for full training
warmup_steps=2000, # set to 2000 for full training
overwrite_output_dir=True,
save_total_limit=2,
fp16=True,
)
trainer = Seq2SeqTrainer(
model=bert2bert,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_data,
eval_dataset=val_data,
)
Even with ``batch_size=1``, I am getting the OOM. It seems like CUDA does not free any memory at all.
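For reference, whether memory is actually being freed between steps can be checked with something like this (a sketch):

```python
import torch

print(torch.cuda.memory_allocated() / 1024 ** 2, "MiB allocated")
print(torch.cuda.memory_reserved() / 1024 ** 2, "MiB reserved (cached by the allocator)")
torch.cuda.empty_cache()  # returns cached blocks to the driver; it does not free live tensors
```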
The versions of my ``transformers`` and ``torch`` are as follows: `transformers 4.2.0, torch 1.7.1+cu110`
Can you help me with this issue? What do you think the issue might be? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9647/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/9647/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9646 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9646/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9646/comments | https://api.github.com/repos/huggingface/transformers/issues/9646/events | https://github.com/huggingface/transformers/issues/9646 | 787,800,660 | MDU6SXNzdWU3ODc4MDA2NjA= | 9,646 | RAG : Adding end to end training for the retriever (both question encoder and doc encoder) | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | [
" @lhoestq\r\n",
"Interesting :) I don't think I'll be able to work on this in the short term but if anyone wants to give it a try maybe I can help with some indications",
"@lhoestq \r\n\r\nI kind of figured out a way to do this. I need a little clarification from you regarding the distributed retriever. As mentioned in this [line](https://github.com/huggingface/transformers/blob/master/examples/research_projects/rag/finetune_rag.py#L84), we use **CustomAcc** class to load knowledgebase and load faiss index as a separate process. \r\n\r\nI want to re-execute the above-mentioned process after several training steps. Let's say 1000.\r\n\r\nwith Pytorch lighting,\r\n\r\n\r\n\r\n def optimizer_step(self, epoch_nb, batch_nb, optimizer, optimizer_i, opt_closure):\r\n if self.trainer.global_step < 500:\r\n ****** run init_ddp_connection functiontion inside CustomAccel class*****\r\n 1. Reinitialize knowledgebase dataset\r\n 2. Relaod faiss index\r\n\r\n \r\n\r\n\r\n",
"Hi @shamanez You can reload an updated index during train time this way:\r\n1. recompute all the embeddings of your knowledge source using the context encoder (costly, depending on your knowledge source size)\r\n2. recreate the FAISS index (which can also be costly)\r\n\r\nThe REALM model does this kind of back and forth between training and indexing, you may want to check out how they did that in the paper.\r\n\r\nI think one approach would be to extend the [RayRetriever](https://github.com/huggingface/transformers/blob/master/examples/research_projects/rag/distributed_ray_retriever.py) since you can define several workers depending on what you want to do (query the index, compute the embeddings or update the index). It's something that I feel is more natural to do with Ray than with pytorch distributed.",
"Ok. I was thinking the same steps looking at the REALM paper. In their code implementation, they run three separate processes and communicate in between processes when they need to compute new embedding, load them and finally feed them to the reader model. It only works with a single GPU.\r\n\r\nAnyways I get when using RAG with PyTorch distributed, the loading operation is done before the trainer. So I was thinking can we execute that within the training loop. Anyways I get what you say. \r\n\r\n\r\n@amogkam can you help with this?\r\n\r\n\r\n",
"@lhoestq \r\n\r\nI kind of tried to implement this with PyTorch distributed retriever method. So ideally I want to re-load the knowledge-based and rea-load the indexes inside the training step (assuming I have an updated doc-encoder). Here is my implementation. Can you please let me know whether it is correct. \r\n\r\n```\r\ndef training_step(self, batch, batch_idx) -> Dict:\r\n\r\n if not batch_idx==0 and batch_idx%10000==0:\r\n self.model.retriever.re_index()\r\n\r\n```\r\nThe reindex is a simple method inside **distributed_pytorch_retriever.py** that only re-load the dataset and idex in the main process.\r\n\r\n```\r\ndef re_index(self):\r\n # initialize retriever only on the main worker\r\n if self._is_main():\r\n logger.info(\"re initializing the index\")\r\n self.index.re_init_index()\r\n\r\n```\r\n\r\n\r\n\r\n\r\nHere my assumption is, we have already started a separate process with the custom accel. Now we are changing something inside it. \r\n\r\nWhat do u this of it?\r\n\r\n",
"It looks good :) Although I haven't tested things like that so let me know how it goes !\r\n\r\nalso one detail: maybe you wanted to write `batch_idx%10000 == 0` instead of `batch_idx%10000`",
"ah yeah, thanks for the update. So I am on it. I Will update you soon :) ",
"@lhoestq I have another question regarding the **load_dataset** function: \r\n\r\nPrior to starting the DDP process, the code loads the indexed dataset by accessing the saved file in the hard disk with the **load_from_disk** .([this line](https://github.com/huggingface/transformers/blob/641f418e102218c4bf16fcd3124bfebed6217ef6/src/transformers/models/rag/retrieval_rag.py#L397)). \r\n\r\n\r\nDuring the training what if the data file (.arrow files) change? Here, the entire data structure is the same, it is just the values that change. \r\n\r\n\r\nIn this kind of scenario do we have to use the load_dataset function again or it will automatically access the updated file? ",
"You would need to create a new arrow file and load the new arrow file.\r\nIf you overwrite the arrow file that is currently loaded I'm pretty sure things won't get updated properly.",
"yeah, that is what I actually observed. Btw I have implemented the end-to-end case with RAY. Currently doing the final testing. Will do a pull request if it is possible. ",
"This is really cool thanks !"
] | 1,610 | 1,621 | 1,621 | CONTRIBUTOR | null | # 🚀 Feature request
As mentioned in this recent paper [End-to-End Training of Neural Retrievers for Open-Domain Question Answering](https://arxiv.org/abs/2101.00408), we can get better results for QA tasks if we fine-tune the retriever section in an end-to-end manner.
## Paper's method
Fine-tune both the doc encoder and the question encoder, and update the precomputed index embeddings every 500 steps.
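For context, the index-refresh step itself (re-embedding the knowledge source with the current doc encoder and rebuilding the FAISS index) can be sketched roughly like this, assuming the standard DPR context encoder, a tiny `datasets` knowledge source, and faiss installed:

```python
import torch
from datasets import Dataset
from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast

ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base").eval()
ctx_tokenizer = DPRContextEncoderTokenizerFast.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")

passages = Dataset.from_dict({"title": ["t1", "t2"], "text": ["first passage", "second passage"]})

def embed(batch):
    # embed each (title, text) pair with the current context/doc encoder
    inputs = ctx_tokenizer(batch["title"], batch["text"], truncation=True, padding=True, return_tensors="pt")
    with torch.no_grad():
        return {"embeddings": ctx_encoder(**inputs).pooler_output.numpy()}

passages = passages.map(embed, batched=True, batch_size=16)
passages.add_faiss_index(column="embeddings")  # the retriever would then reload this fresh index
```

In an end-to-end setup this step would be repeated every N optimizer steps (500 in the paper) with the updated doc-encoder weights.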
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9646/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9645 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9645/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9645/comments | https://api.github.com/repos/huggingface/transformers/issues/9645/events | https://github.com/huggingface/transformers/issues/9645 | 787,786,418 | MDU6SXNzdWU3ODc3ODY0MTg= | 9,645 | Odd predictions of T5 models in recent versions | {
"login": "danyaljj",
"id": 2441454,
"node_id": "MDQ6VXNlcjI0NDE0NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danyaljj",
"html_url": "https://github.com/danyaljj",
"followers_url": "https://api.github.com/users/danyaljj/followers",
"following_url": "https://api.github.com/users/danyaljj/following{/other_user}",
"gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions",
"organizations_url": "https://api.github.com/users/danyaljj/orgs",
"repos_url": "https://api.github.com/users/danyaljj/repos",
"events_url": "https://api.github.com/users/danyaljj/events{/privacy}",
"received_events_url": "https://api.github.com/users/danyaljj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello @danyaljj,\r\n\r\nThis is expected actually. Could you change your code as follows to get the previous results:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer, T5ForConditionalGeneration\r\n\r\nmodel_name = \"allenai/unifiedqa-t5-small\" # you can specify the model size here\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\nmodel = T5ForConditionalGeneration.from_pretrained(model_name)\r\n\r\ndef run_model(input_string, **generator_args):\r\n input_ids = tokenizer.encode(input_string, return_tensors=\"pt\")\r\n res = model.generate(input_ids, **generator_args)\r\n return tokenizer.batch_decode(res, skip_special_tokens=True)\r\n\r\nrun_model(\"which is best conductor? \\\\n (a) iron (b) feather\")\r\n```",
"Thanks for the quick reply! The new code works! Thanks! "
] | 1,610 | 1,610 | 1,610 | CONTRIBUTOR | null | We are seeing odd predictions by T5 models ([UnifiedQA models](https://github.com/allenai/unifiedqa/)) when using the recent HF version (4.2.1). Here is the discussion: https://github.com/allenai/unifiedqa/issues/11
### Who can help
@TevenLeScao @patrickvonplaten
## To reproduce
Try running the following script:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name = "allenai/unifiedqa-t5-small" # you can specify the model size here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
return [tokenizer.decode(x) for x in res]
run_model("which is best conductor? \\n (a) iron (b) feather")
```
- For `transformers==4.2.1`, I am getting `['<pad> iron</s>']`, which is not good.
- However, `transformers==3.5.1` and `transformers==3.1.0` give me `['iron']`, which is the expected response.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9645/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9644 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9644/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9644/comments | https://api.github.com/repos/huggingface/transformers/issues/9644/events | https://github.com/huggingface/transformers/issues/9644 | 787,725,458 | MDU6SXNzdWU3ODc3MjU0NTg= | 9,644 | Fail to convert the Funnel Transformer tensorflow version to transformer one when use the official script | {
"login": "RyanHuangNLP",
"id": 49582480,
"node_id": "MDQ6VXNlcjQ5NTgyNDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/49582480?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RyanHuangNLP",
"html_url": "https://github.com/RyanHuangNLP",
"followers_url": "https://api.github.com/users/RyanHuangNLP/followers",
"following_url": "https://api.github.com/users/RyanHuangNLP/following{/other_user}",
"gists_url": "https://api.github.com/users/RyanHuangNLP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RyanHuangNLP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RyanHuangNLP/subscriptions",
"organizations_url": "https://api.github.com/users/RyanHuangNLP/orgs",
"repos_url": "https://api.github.com/users/RyanHuangNLP/repos",
"events_url": "https://api.github.com/users/RyanHuangNLP/events{/privacy}",
"received_events_url": "https://api.github.com/users/RyanHuangNLP/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Could you explain the full procedure? Where did you obtain the Funnel transformer TensorFlow version? Is it a model you trained yourself using another framework? (like this one: https://github.com/laiguokun/Funnel-Transformer)",
"just use the official ones(like this one: https://github.com/laiguokun/Funnel-Transformer) @LysandreJik \r\nthe layer map \"input\" -> \"embedding\", raise error",
"Could you provide the configuration you used, as well as which Funnel Transformer (which identifier? Is it the TensorFlow or the TensorFlow-Full) you tried to convert? Thank you",
"@LysandreJik I was train my funnel with the official code, I think my pretrain tensorflow is Tensorflow-Full with the adam weight. May be I need to transform my pretrain model to the TensorFlow or the TensorFlow-Full one first, then use the convert script to change to the transformer one?",
"I see! Can you try the fix proposed in #9683 and let me know if it fixes your issue?\r\nYou can install it in your environment with:\r\n```\r\npip install git+https://github.com/huggingface/transformers.git@convert_funnel\r\n```\r\n\r\nor if you have a clone of the repository, you can pull it and checkout the `convert_funnel` branch.",
"@LysandreJik Thanks for quickly reply. I will take a try.",
"@LysandreJik It has raise a new error, I cannot `convert_funnel ` branch, I found that it has merge to `master` branch, so I use the `master` branch\r\n\r\nwhen set `base_model=False`\r\n\r\n```\r\nfrom transformers.models.funnel.convert_funnel_original_tf_checkpoint_to_pytorch import convert_tf_checkpoint_to_pytorch\r\n\r\ntf_checkpoint_path = \"xxxx/B6-6-6H768-ELEC-TF_model.ckpt\"\r\n\r\nconfig_file = \"xxxxxx\"\r\n\r\npytorch_dump_path = \"xxxxxx/funnel-base\"\r\n\r\nconvert_tf_checkpoint_to_pytorch(tf_checkpoint_path, config_file, pytorch_dump_path, False)\r\n```\r\n\r\n```\r\n File \"test.py\", line 9, in <module>\r\n convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, config_file, pytorch_dump_path, False)\r\n File \"/root/miniconda3/envs/transformers/lib/python3.7/site-packages/transformers-4.3.0.dev0-py3.7.egg/transformers/models/funnel/convert_funnel_original_tf_checkpoint_to_pytorch.py\", line 36, in convert_tf_checkpoint_to_pytorch\r\n load_tf_weights_in_funnel(model, config, tf_checkpoint_path)\r\n File \"/root/miniconda3/envs/transformers/lib/python3.7/site-packages/transformers-4.3.0.dev0-py3.7.egg/transformers/models/funnel/modeling_funnel.py\", line 136, in load_tf_weights_in_funnel\r\n pointer = pointer.layers[layer_index]\r\n File \"/root/miniconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/container.py\", line 164, in __getitem__\r\n return self._modules[self._get_abs_string_index(idx)]\r\n File \"/root/miniconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/container.py\", line 154, in _get_abs_string_index\r\n raise IndexError('index {} is out of range'.format(idx))\r\nIndexError: index 6 is out of range\r\n```\r\n\r\nwhen set `base_model=True`\r\n\r\n```\r\nfrom transformers.models.funnel.convert_funnel_original_tf_checkpoint_to_pytorch import convert_tf_checkpoint_to_pytorch\r\n\r\ntf_checkpoint_path = \"xxxx/B6-6-6H768-ELEC-TF_model.ckpt\"\r\n\r\nconfig_file = \"xxxxxx\"\r\n\r\npytorch_dump_path = \"xxxxxx/funnel-base\"\r\n\r\nconvert_tf_checkpoint_to_pytorch(tf_checkpoint_path, config_file, pytorch_dump_path, True)\r\n```\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"test.py\", line 9, in <module>\r\n convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, config_file, pytorch_dump_path, True)\r\n File \"/root/miniconda3/envs/transformers/lib/python3.7/site-packages/transformers-4.3.0.dev0-py3.7.egg/transformers/models/funnel/convert_funnel_original_tf_checkpoint_to_pytorch.py\", line 36, in convert_tf_checkpoint_to_pytorch\r\n load_tf_weights_in_funnel(model, config, tf_checkpoint_path)\r\n File \"/root/miniconda3/envs/transformers/lib/python3.7/site-packages/transformers-4.3.0.dev0-py3.7.egg/transformers/models/funnel/modeling_funnel.py\", line 136, in load_tf_weights_in_funnel\r\n pointer = pointer.layers[layer_index]\r\n File \"/root/miniconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 772, in __getattr__\r\n type(self).__name__, name))\r\ntorch.nn.modules.module.ModuleAttributeError: 'FunnelEncoder' object has no attribute 'layers'\r\n```",
"@sgugger do you have an idea of what might be going wrong?",
"@LysandreJik Which config file is using, I use the original full version tensorflow one `net_config.json`\r\n\r\n```\r\n{\r\n \"block_size\": \"6_6_6\",\r\n \"d_embed\": 768,\r\n \"d_head\": 64,\r\n \"d_inner\": 3072,\r\n \"d_model\": 768,\r\n \"decoder_size\": \"2\",\r\n \"dropact\": 0.0,\r\n \"dropatt\": 0.1,\r\n \"dropout\": 0.1,\r\n \"ff_activation\": \"gelu\",\r\n \"init\": \"truncated_normal\",\r\n \"init_range\": 0.1,\r\n \"init_std\": 0.02,\r\n \"n_head\": 12,\r\n \"pool_q_only\": true,\r\n \"pooling_size\": 2,\r\n \"pooling_type\": \"mean\",\r\n \"rel_attn_type\": \"factorized\",\r\n \"separate_cls\": true,\r\n \"vocab_size\": 21128\r\n}\r\n```",
"No you need to convert your configuration first to a proper `FunnelConfig`, that is what the conversion script is expecting.",
"@LysandreJik @sgugger Now the setting is the same, but still raise error, cannot convert the full tensorflow version to transformers ones",
"Like I said before, it works for me. So without more information about the environment, the command you launch and the stack trace, there is really nothing we can do to help.",
"> @LysandreJik Which config file is using, I use the original full version tensorflow one `net_config.json`\r\n> \r\n> ```\r\n> {\r\n> \"block_size\": \"6_6_6\",\r\n> \"d_embed\": 768,\r\n> \"d_head\": 64,\r\n> \"d_inner\": 3072,\r\n> \"d_model\": 768,\r\n> \"decoder_size\": \"2\",\r\n> \"dropact\": 0.0,\r\n> \"dropatt\": 0.1,\r\n> \"dropout\": 0.1,\r\n> \"ff_activation\": \"gelu\",\r\n> \"init\": \"truncated_normal\",\r\n> \"init_range\": 0.1,\r\n> \"init_std\": 0.02,\r\n> \"n_head\": 12,\r\n> \"pool_q_only\": true,\r\n> \"pooling_size\": 2,\r\n> \"pooling_type\": \"mean\",\r\n> \"rel_attn_type\": \"factorized\",\r\n> \"separate_cls\": true,\r\n> \"vocab_size\": 21128\r\n> }\r\n> ```\r\n\r\nI got the same problem as you and I manage to convert the checkpoint by using the config file at the hugging face model hub. If you use 6-6-6 block use this one https://huggingface.co/funnel-transformer/intermediate/raw/main/config.json and change vocab size.",
"@sgugger @LysandreJik I think is the config file problem, I try @NLP33 advise fix the problem",
"@RyanHuangNLP I have asked you before to give us the command your launch, the environment you use and a the content of the config file you are using. There is no point tagging me further on this issue with a vague message if you are not willing to share for those information as I cannot investigate a bug I cannot reproduce.\r\nAs I also said before and @NLP33 indicated, the script only supports config files corresponding to a config created by using `FunnelConfig` from transformers. It does not support the original config files from the original repo."
] | 1,610 | 1,611 | 1,611 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:3.5.1
- Platform:Centos
- Python version:3.7
- PyTorch version (GPU?):1.6.0
- Tensorflow version (GPU?):2.3.2
- Using GPU in script?:yes
- Using distributed or parallel set-up in script?:yes
## Information
Model I am using (Bert, XLNet ...):Funnel Transformer
## To reproduce
Steps to reproduce the behavior:
1. Use the script `convert_funnel_original_tf_checkpoint_to_pytorch.py` @sgugger @LysandreJik
This raises the following error:
```
Traceback (most recent call last):
File "run_pretraining.py", line 158, in <module>
convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path, args.config_file, args.pytorch_dump_path)
File "run_pretraining.py", line 40, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_funnel(model, config, tf_checkpoint_path)
File "run_pretraining.py", line 122, in load_tf_weights_in_funnel
pointer = getattr(pointer, _layer_map[m_name])
File "/root/miniconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 772, in __getattr__
type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'FunnelForPreTraining' object has no attribute 'embeddings'
```
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
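For reference, a minimal sketch of the conversion flow with a `transformers`-style config instead of the original `net_config.json` (paths and config values are placeholders and have not been verified against this checkpoint):

```python
from transformers import FunnelConfig
from transformers.models.funnel.convert_funnel_original_tf_checkpoint_to_pytorch import (
    convert_tf_checkpoint_to_pytorch,
)

# Build and save a transformers-style config mirroring the original net_config.json values
config = FunnelConfig(block_sizes=[6, 6, 6], d_model=768, n_head=12, vocab_size=21128)
config.save_pretrained("funnel-config")  # writes funnel-config/config.json

convert_tf_checkpoint_to_pytorch(
    "B6-6-6H768-ELEC-TF_model.ckpt",  # tf_checkpoint_path
    "funnel-config/config.json",      # config_file
    "funnel-base",                    # pytorch_dump_path
    False,                            # base_model
)
```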
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9644/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9643 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9643/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9643/comments | https://api.github.com/repos/huggingface/transformers/issues/9643/events | https://github.com/huggingface/transformers/issues/9643 | 787,718,756 | MDU6SXNzdWU3ODc3MTg3NTY= | 9,643 | [Feature Request] Add 3D attention mask for T5Model | {
"login": "yongyi-wu",
"id": 60588626,
"node_id": "MDQ6VXNlcjYwNTg4NjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/60588626?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yongyi-wu",
"html_url": "https://github.com/yongyi-wu",
"followers_url": "https://api.github.com/users/yongyi-wu/followers",
"following_url": "https://api.github.com/users/yongyi-wu/following{/other_user}",
"gists_url": "https://api.github.com/users/yongyi-wu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yongyi-wu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yongyi-wu/subscriptions",
"organizations_url": "https://api.github.com/users/yongyi-wu/orgs",
"repos_url": "https://api.github.com/users/yongyi-wu/repos",
"events_url": "https://api.github.com/users/yongyi-wu/events{/privacy}",
"received_events_url": "https://api.github.com/users/yongyi-wu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hello @yongyi-wu, \r\n\r\nyes you're right. T5 is not yet fully compatible with 3D attention_mask input. Currently, I won't find enough time to work on adding this feature, but I'll post it under \"Community projects\" in case someone from the community is interested in giving it a shot.\r\n\r\nAlso feel free to open a PR yourself, if you want to try :-) ",
"Hi, I am new here but would like to give this a shot. Because it is my first issue I could use some direction if on how to tackle this, if this is okay with you. \r\n\r\n- Would you prefer the sanity check or an improved `get_extended_attention_mask()` method? \r\n- Do you know of an already existing implementations with 3D attention mask as a reference. \r\n- Where would you like to see the solution implemented. ",
"Hi, I'm a newbie in transformers and trying to make customized 3D attention mask with T5ConditionalGeneration.\r\nI was googling 3D attention mask for T5 model and found it in here.\r\n\r\nFrom what I understood above is that, I should add one more `encoder_sequence_length` variable at the end of line `encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)` in T5Stack forward function if I want to build 3D attention?\r\n\r\nDo I need to edit anything else?\r\n\r\n@lexhuismans \r\n\r\nThanks!"
] | 1,610 | 1,672 | 1,620 | NONE | null | ## Environment info
- `transformers` version: 4.2.1
- Platform: Linux
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cuda11.0
### Who can help
T5: @patrickvonplaten
## Information
The `get_extended_attention_mask()` method does not sufficiently address the case of a 3D attention mask. The problem emerges for T5Model because `input_ids` and `decoder_input_ids` are of different lengths and the `attention_mask` is of shape [batch_size, seq_length, seq_length]. The decoder uses `attention_mask` directly as `encoder_attention_mask` in cross-attention, which is of incorrect shape, and the error message does not give any information about why it happens.
## To reproduce
As described above. I can add code later if needed.
## Expected behavior
I propose to add a sanity check for attention masks or improve the `get_extended_attention_mask()` method.
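For illustration, a minimal sketch of such a sanity check (the function name and placement are hypothetical, not the actual transformers implementation):

```python
import torch

def check_attention_mask_shape(attention_mask: torch.Tensor, batch_size: int, seq_length: int):
    """Fail early with an informative error instead of an obscure shape mismatch downstream."""
    if attention_mask.dim() == 2:
        expected = (batch_size, seq_length)
    elif attention_mask.dim() == 3:
        expected = (batch_size, seq_length, seq_length)
    else:
        raise ValueError(f"attention_mask must be 2D or 3D, got {attention_mask.dim()}D")
    if tuple(attention_mask.shape) != expected:
        raise ValueError(
            f"attention_mask has shape {tuple(attention_mask.shape)}, expected {expected}"
        )
```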
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9643/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9642 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9642/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9642/comments | https://api.github.com/repos/huggingface/transformers/issues/9642/events | https://github.com/huggingface/transformers/issues/9642 | 787,716,988 | MDU6SXNzdWU3ODc3MTY5ODg= | 9,642 | Multi-GPU inference with Tensorflow backend | {
"login": "fingoldo",
"id": 16359856,
"node_id": "MDQ6VXNlcjE2MzU5ODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/16359856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fingoldo",
"html_url": "https://github.com/fingoldo",
"followers_url": "https://api.github.com/users/fingoldo/followers",
"following_url": "https://api.github.com/users/fingoldo/following{/other_user}",
"gists_url": "https://api.github.com/users/fingoldo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fingoldo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fingoldo/subscriptions",
"organizations_url": "https://api.github.com/users/fingoldo/orgs",
"repos_url": "https://api.github.com/users/fingoldo/repos",
"events_url": "https://api.github.com/users/fingoldo/events{/privacy}",
"received_events_url": "https://api.github.com/users/fingoldo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hello!!\r\n\r\nCan you please share the code you are using as for me it works as expected:\r\n```python\r\nfrom transformers import TFBertModel, BertTokenizer\r\nimport tensorflow as tf\r\n\r\nfirst_strategy = tf.distribute.MirroredStrategy()\r\n\r\nwith first_strategy.scope():\r\n\tmodel = TFBertModel.from_pretrained(\"bert-base-cased\")\r\n\tmodel.save_pretrained(\"my_model\")\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-cased\")\r\nanother_strategy = tf.distribute.OneDeviceStrategy(\"/cpu:0\")\r\n\r\nwith another_strategy.scope():\r\n restored_keras_model_ds = TFBertModel.from_pretrained(\"my_model\")\r\n\r\n inputs = tokenizer(\"Hello world.\", return_tensors=\"tf\")\r\n predict_dataset = tf.data.Dataset.from_tensor_slices(inputs).batch(1)\r\n dist_predict_dataset = another_strategy.experimental_distribute_dataset(predict_dataset)\r\n for batch in dist_predict_dataset:\r\n \tanother_strategy.run(restored_keras_model_ds, args=(batch,))\r\n```",
"Thank you guys so much for the response! It was not obvious to use save_pretrained under the scope. Your example runs successfully, however on a 8 GPUs machine I observe (with bigh enough input list, of course) a weird pattern when maximum 2 GPUs are busy, and the rest are simply stale. Then after some seconds new pair of GPUs become active and rest are [waiting.](https://pasteboard.co/JKmszWl.png) It happens no matter what strategy I try, MirroredStrategy or MultiWorkerMirroredStrategy. @jplu what strategy would you recommend to utilize all 8 GPUs?",
"This simply means that TF needs no more than 2 GPUs to run your inference.",
"But it's taking more than 40 seconds to run it. It definitely needs to utilize more ...\r\n\r\n> \r\n> 2021-01-19 12:37:55,915 - INFO - <ipython-input-108-8a0316d72a4a> - root - infer_natively - line: 4 - Tokenizing dataset of length 100000...\r\n> 2021-01-19 12:38:07,630 - INFO - <ipython-input-108-8a0316d72a4a> - root - infer_natively - line: 9 - Converting dataset to tf dataset using batch size 244...\r\n> 2021-01-19 12:38:07,634 - INFO - <ipython-input-108-8a0316d72a4a> - root - infer_natively - line: 12 - Distributing tf dataset across replicas...\r\n> 2021-01-19 12:38:07,714 - INFO - <ipython-input-108-8a0316d72a4a> - root - infer_natively - line: 16 - Inferencing using 8 GPUs\r\n> 2021-01-19 12:39:38,318 - INFO - <ipython-input-108-8a0316d72a4a> - root - infer_natively - line: 36 - Done. nbatches processed: 26",
"Tensorflow doesn't take the time as reference but the size. If your data can fit on 2 GPUs then it uses only 2. I suggest you to read this to better understand how it works. https://www.tensorflow.org/guide/gpu",
"> as reference but the size. If your data can fit on 2 GPUs then it uses only 2. I suggest you to \r\n\r\nFollowing this link, I was not able to find any mentioning of when tf can select lower number of GPUs to run inference on, depending on data size. I tried with a million sentences and I'm still observing that pattern when only 2 GPUs are heavily loaded, and the rest has 0% utilization. and that pair of active GPUs changes randomly as the time goes. So something is definitely wrong with implementation. I was asking tf \"please use all devices for this huge workload\", and you are saying it just like \"it can be done using 2 GPUs dude so i'm using 2, I don't care how long you gonna wait for the result\" ? :-)",
"@jplu so if you know how to make it use all 8 GPUs in my particular case for 1 million of input sentences please advise, it would solve the issue completely.",
"Really sorry I don't know what to tell you more, if you have mostly TF related questions I suggest you to open an issue on the TF github repo.",
"Hi,\r\n\r\nI'm having similar issues with inference when using multi-gpu, the `predict` function returns empty output despite being actually processing the input. \r\n\r\n```python\r\nfrom transformers import BertTokenizerFast, TFBertForSequenceClassification\r\nimport tensorflow as tf\r\n\r\nstrategy = tf.distribute.MirroredStrategy()\r\n#strategy = tf.distribute.OneDeviceStrategy(\"/gpu:0\")\r\nwith strategy.scope():\r\n tf_model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')\r\n\r\n tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')\r\n inputs = tokenizer('This is a test', 'Esto es una prueba',\r\n return_tensors='tf', max_length=200,\r\n padding='max_length', truncation=True,\r\n return_attention_mask=True,\r\n return_token_type_ids=False)\r\n print(tf_model.predict([inputs[\"input_ids\"], inputs[\"attention_mask\"]],\r\n verbose=1))\r\n print(tf_model([inputs[\"input_ids\"], inputs[\"attention_mask\"]]))\r\n```\r\n```\r\nAll model checkpoint layers were used when initializing TFBertForSequenceClassification.\r\n\r\nSome layers of TFBertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nWARNING:tensorflow:From /venv/lib/python3.7/site-packages/tensorflow/python/data/ops/multi_device_iterator_ops.py:601: get_next_as_optional (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nUse `tf.data.Iterator.get_next_as_optional()` instead.\r\nWARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nWARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nWARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nWARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nWARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nWARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nWARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nWARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\n1/1 [==============================] - 0s 241us/step\r\nTFSequenceClassifierOutput(loss=None, logits=None, hidden_states=None, attentions=None)\r\nTFSequenceClassifierOutput(loss=None, logits=<tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[-0.47814545, 0.35146457]], dtype=float32)>, hidden_states=None, attentions=None)\r\n```\r\nIs this expected to happen? 
It would be great to be able to use predict function for performance reasons.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,610 | 1,620 | 1,620 | NONE | null | Is this already supported, maybe? I know that multi-GPU TRAINING is supported with TF* models pretty well, but not inference. What is the recommended way when one wants to do inference on a large batch of text (tens of millions of rows)? Currently only one of the GPUs gets loaded. TensorFlow has a [guide](https://www.tensorflow.org/tutorials/distribute/save_and_load) on how to use a model saved in the native TF format to do distributed inference under a scope:
```python
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
    loaded = tf.saved_model.load(saved_model_path)
    inference_func = loaded.signatures[DEFAULT_FUNCTION_KEY]
    dist_predict_dataset = another_strategy.experimental_distribute_dataset(
        predict_dataset)
    # Calling the function in a distributed manner
    for batch in dist_predict_dataset:
        another_strategy.run(inference_func, args=(batch,))
```
However, it seems that transformers does not support saving in this native format? At least TFDistilBertForSequenceClassification, when loaded back, has damaged input signatures (no attention_mask, wrong sequence length, fake None inputs) and can't process anything. And this very tracker is crowded with similar questions which are left unanswered. Can anyone shed some light on the best approach to distributed inference, please? Also, adding a bullet on this to the documentation would be extremely helpful for many folks.
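For what it's worth, one way to pin the input signature when exporting to the native SavedModel format is to wrap the call in a `tf.function` with an explicit `input_signature` — a rough, untested sketch (model name and output selection are just examples, not an officially supported path):

```python
import tensorflow as tf
from transformers import TFDistilBertForSequenceClassification

model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")

@tf.function(input_signature=[
    tf.TensorSpec([None, None], tf.int32, name="input_ids"),
    tf.TensorSpec([None, None], tf.int32, name="attention_mask"),
])
def serve(input_ids, attention_mask):
    outputs = model(input_ids, attention_mask=attention_mask)
    return {"logits": outputs.logits}  # return a flat dict of tensors for the signature

tf.saved_model.save(model, "exported_model", signatures={"serving_default": serve})
```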
"url": "https://api.github.com/repos/huggingface/transformers/issues/9642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9642/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9641 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9641/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9641/comments | https://api.github.com/repos/huggingface/transformers/issues/9641/events | https://github.com/huggingface/transformers/issues/9641 | 787,596,565 | MDU6SXNzdWU3ODc1OTY1NjU= | 9,641 | Conditional branching logic in modeling_tf_flaubert.py causing errors with TF Graph | {
"login": "ANarayan",
"id": 5660075,
"node_id": "MDQ6VXNlcjU2NjAwNzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5660075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ANarayan",
"html_url": "https://github.com/ANarayan",
"followers_url": "https://api.github.com/users/ANarayan/followers",
"following_url": "https://api.github.com/users/ANarayan/following{/other_user}",
"gists_url": "https://api.github.com/users/ANarayan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ANarayan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ANarayan/subscriptions",
"organizations_url": "https://api.github.com/users/ANarayan/orgs",
"repos_url": "https://api.github.com/users/ANarayan/repos",
"events_url": "https://api.github.com/users/ANarayan/events{/privacy}",
"received_events_url": "https://api.github.com/users/ANarayan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello,\r\n\r\nTo help you we need to have more information such as your environement info, and a standalone piece of code to let us reproduce your error. Thanks!",
"As a first guess, I can say that the issue you get might come from a misformed input, can you try with:\r\n```\r\[email protected]\r\ndef train_step(inputs, mask, token_type_ids):\r\n with tf.GradientTape() as tape:\r\n a = model({\r\n \"input_ids\": inputs\r\n \"attention_mask\": mask,\r\n \"token_type_ids\": token_type_ids,\r\n }, training=True)\r\n```",
"That worked! Thank you!!"
] | 1,610 | 1,610 | 1,610 | NONE | null | Hi @jplu !
I am encountering an error when running the TFFlaubert model inside of a tensorflow graph.
Here is some code to reproduce the issue:
```
from transformers import FlaubertTokenizer, TFFlaubertModel, FlaubertConfig
import tensorflow as tf
config = FlaubertConfig.from_pretrained('jplu/tf-flaubert-small-cased', output_attentions=True, output_hidden_states=True, return_dict=True)
model = TFFlaubertModel.from_pretrained(config=config, pretrained_model_name_or_path='jplu/tf-flaubert-small-cased')
@tf.function
def train_step(inputs, mask, token_type_ids):
with tf.GradientTape() as tape:
a = model({
"input_ids": inputs,
"training": True,
"attention_mask": mask,
"token_type_ids": token_type_ids,
})
train_step(inputs, mask, token_type_ids)
```
The error seems to be caused by L611-624 in modeling_tf_flaubert.py [here](https://github.com/huggingface/transformers/blob/c60e0e1ee45f4bf1017736b146c51729f120bb83/src/transformers/models/flaubert/modeling_tf_flaubert.py#L611)
The error message is as follows:
> TypeError: in user code:
>
> python-input-5-4a1e131ff478:4 train_step *
> a = model({
> /Users/ludwig/venv/lib/python3.6/site-packages/transformers/models/flaubert/modeling_tf_flaubert.py:274 call *
> outputs = self.transformer(
> /Users/ludwig/venv/lib/python3.6/site-packages/transformers/models/flaubert/modeling_tf_flaubert.py:616 call *
> for i in range(self.n_layers):
> /Users/ludwig/venv/lib/python3.6/site-packages/tensorflow/python/autograph/operators/control_flow.py:1163 if_stmt
> _tf_if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts)
> /Users/ludwig/venv/lib/python3.6/site-packages/tensorflow/python/autograph/operators/control_flow.py:1210 _tf_if_stmt
> cond, aug_body, aug_orelse, strict=True)
> /Users/udwig/venv/lib/python3.6/site-packages/tensorflow/python/util/dispatch.py:201 wrapper
> return target(*args, **kwargs)
> /Users/ludwig/venv/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py:538 new_func
> return func(*args, **kwargs)
> /Users/ludwig/venv/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py:1180 cond
> return cond_v2.cond_v2(pred, true_fn, false_fn, name)
> /Users/ludwig/venv/lib/python3.6/site-packages/tensorflow/python/ops/cond_v2.py:96 cond_v2
> op_return_value=pred)
> /Users/ludwig/venv/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py:990 func_graph_from_py_func
> func_outputs = python_func(*func_args, **func_kwargs)
> /Users/ludwig/venv/lib/python3.6/site-packages/tensorflow/python/autograph/operators/control_flow.py:1206 aug_orelse
> _verify_tf_cond_vars(new_body_vars_[0], new_orelse_vars, symbol_names)
> /Users/ludwig/venv/lib/python3.6/site-packages/tensorflow/python/autograph/operators/control_flow.py:365 _verify_tf_cond_vars
> ' branches:\n\n{}'.format(name, str(e)))
>
> TypeError: 'hidden_states' must have the same nested structure in the main and else branches:
>
> The two structures don't have the same nested structure.
>
> First structure: type=tuple str=(<tf.Tensor 'tf_flaubert_model/transformer/mul:0' shape=(18, 44, 512) dtype=float32>,)
>
> Second structure: type=tuple str=()
>
> More specifically: The two structures don't have the same number of elements. First structure: type=tuple str=(<tf.Tensor 'tf_flaubert_model/transformer/mul:0' shape=(18, 44, 512) dtype=float32>,). Second structure: type=tuple str=()
> Entire first structure:
> (.,)
> Entire second structure:
> ()
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9641/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9640 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9640/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9640/comments | https://api.github.com/repos/huggingface/transformers/issues/9640/events | https://github.com/huggingface/transformers/pull/9640 | 787,587,683 | MDExOlB1bGxSZXF1ZXN0NTU2MjU2NTQw | 9,640 | Renamed `nlp` variables #9455 | {
"login": "terrenceedmonds",
"id": 22152969,
"node_id": "MDQ6VXNlcjIyMTUyOTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/22152969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/terrenceedmonds",
"html_url": "https://github.com/terrenceedmonds",
"followers_url": "https://api.github.com/users/terrenceedmonds/followers",
"following_url": "https://api.github.com/users/terrenceedmonds/following{/other_user}",
"gists_url": "https://api.github.com/users/terrenceedmonds/gists{/gist_id}",
"starred_url": "https://api.github.com/users/terrenceedmonds/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/terrenceedmonds/subscriptions",
"organizations_url": "https://api.github.com/users/terrenceedmonds/orgs",
"repos_url": "https://api.github.com/users/terrenceedmonds/repos",
"events_url": "https://api.github.com/users/terrenceedmonds/events{/privacy}",
"received_events_url": "https://api.github.com/users/terrenceedmonds/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I can change `unmask` to `unmasker`. And I'll go back and take care of the merge conflicts and `make style` . ",
"I think your rebase went wrong as the diff as suddenly become unreadable. Could you close this PR and open a new one from your branch? Don't hesitate to tag me on it.",
"Hi @terrenceedmonds I don't think you ever opened a new clean PR from your branch (might need a new rebase first since it's been a while). You had done all the work for this issue so it would be great to merge it!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,610 | 1,619 | 1,619 | NONE | null | * Give better names to pipeline variables named nlp
* This was desired because nlp was not a descriptive variable name
Fixes # 9455
@Narsil , @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9640/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9640",
"html_url": "https://github.com/huggingface/transformers/pull/9640",
"diff_url": "https://github.com/huggingface/transformers/pull/9640.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9640.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9639 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9639/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9639/comments | https://api.github.com/repos/huggingface/transformers/issues/9639/events | https://github.com/huggingface/transformers/pull/9639 | 787,569,678 | MDExOlB1bGxSZXF1ZXN0NTU2MjQ0ODQ5 | 9,639 | Add head_mask/decoder_head_mask for TF BART models | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@stancld, thanks so much for tackling this! I think it would be a great addition if we could add a `test_headmasking` method for TF as well. \r\nI think it might be better if we don't try to have the exact test as in PyTorch. For now it should be enough to just leave out all gradient-related statements, like `.backward()`, `requires_grad(...)` in TF. The attentions output should still be 0 accordingly. ",
"Hey @patrickvonplaten, I hope this PR is ready for review. There's newly implemented `test_headmasking` method which follows the method from PyTorch testing except for the gradient-related statements as you pointed above.\r\n\r\nIt seems all checks have passed after rebasing this PR.",
"Also, @jplu it would be great if you could take a quick look if this is all serving compatible (I don't see a reason why it wouldn't be though)",
"Just done further tests on your PR and the changes are not graph compliant and the following slow tests are failing:\r\n\r\n- test_saved_model_creation\r\n- test_saved_model_creation_extended\r\n\r\nOne of the reasons is what @sgugger raised.",
"@jplu @sgugger Thank you very much for your comments and suggested solution. I'll try to fix these issues and send a new commit!",
"Hi @jplu, could you, please, review the changes in the code I've done to say whether assertions are done more appropriately now? :)\nI've been also struggling to run (on my local) those four slow tests you mentioned last time, but I'm gonna have a look at that at the weekend if we're afraid of not passing.",
"I confirm that the assertions are done more appropriately now! Those four tests are among the most important one for the TF code base (they are run in slow mode because unfortunately they take some time to be executed).\r\n\r\nIf you need some help to make them pass, I will be happy to.",
"> @jplu I removed `global_rng` and leave it as it was before changes. Hopefully, now this PR is ready for a final review\r\n\r\nAre these tests finally pass? :\r\n* test_saved_model_with_hidden_states_output\r\n* test_saved_model_with_attentions_output\r\n* test_saved_model_creation\r\n* test_saved_model_creation_extended\r\n\r\nIf yes, I will approve the PR :)",
"@jplu I ran these 4 aforementioned tests for BART and all those tests passed.",
"Merging, thanks a lot for your efforts @stancld!!"
] | 1,610 | 1,611 | 1,611 | CONTRIBUTOR | null | This PR adds `head_mask` and `decoder_head_mask` input arguments for TF BART-based models. The full list of models is as follows:
* **TFBART**
* **TFMBart**
* **TFBlenderbot**
* **TFBlenderbotSmall**
* **TFMarian**
* **TFPegasus**
This PR can be deemed as a TF counterpart to the PR #9569.
<hr>
**Further information:**
* I've added `test_headmasking` functionality to `tests/test_modeling_tf_common.py`
* **_TODO_**: Add a test (as a part of `test_headmasking`) to verify that we can get a gradient back for importance score computation. I am not so familiar with TensorFlow; therefore, I am not fully sure about the TF equivalent of the following (a possible sketch is given after the snippet):
```
outputs = model(**inputs, return_dict=True)
output = sum(t.sum() for t in outputs[0])
output = output.sum()
output.backward()
```
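A rough TF counterpart using `tf.GradientTape` might look like this (untested sketch, assuming `inputs` contains a plain `head_mask` tensor; not part of the PR):

```python
import tensorflow as tf

with tf.GradientTape() as tape:
    tape.watch(inputs["head_mask"])      # head_mask is a constant tensor, so watch it explicitly
    outputs = model(**inputs, return_dict=True)
    output = tf.reduce_sum(outputs[0])   # plays the role of sum(t.sum() ...) + .backward()

grads = tape.gradient(output, inputs["head_mask"])  # gradient usable for head importance scores
```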
<hr>
Reviewer: @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9639/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9639/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9639",
"html_url": "https://github.com/huggingface/transformers/pull/9639",
"diff_url": "https://github.com/huggingface/transformers/pull/9639.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9639.patch",
"merged_at": 1611651001000
} |
https://api.github.com/repos/huggingface/transformers/issues/9638 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9638/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9638/comments | https://api.github.com/repos/huggingface/transformers/issues/9638/events | https://github.com/huggingface/transformers/issues/9638 | 787,551,145 | MDU6SXNzdWU3ODc1NTExNDU= | 9,638 | ValueError: Expected floating point type, got <dtype: 'int32'> for TFGPT2LMHeadModel | {
"login": "farazk86",
"id": 33456896,
"node_id": "MDQ6VXNlcjMzNDU2ODk2",
"avatar_url": "https://avatars.githubusercontent.com/u/33456896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/farazk86",
"html_url": "https://github.com/farazk86",
"followers_url": "https://api.github.com/users/farazk86/followers",
"following_url": "https://api.github.com/users/farazk86/following{/other_user}",
"gists_url": "https://api.github.com/users/farazk86/gists{/gist_id}",
"starred_url": "https://api.github.com/users/farazk86/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/farazk86/subscriptions",
"organizations_url": "https://api.github.com/users/farazk86/orgs",
"repos_url": "https://api.github.com/users/farazk86/repos",
"events_url": "https://api.github.com/users/farazk86/events{/privacy}",
"received_events_url": "https://api.github.com/users/farazk86/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@jplu do you have any experience with creating model environments on GCP with TensorFlow?",
"Hello!\r\n\r\nCan you first share your local env and the env you are using on your GCP machine? ",
"Hi @jplu \r\n\r\nThanks for the response. \r\n\r\nFor my local environment I'm on Python 3.6, tenensorflow 2.4, transformers 2.8.0. Is this what you meant by local environment? \r\n\r\nOn GCP, here is my model version settings:\r\n\r\n```\r\nModel sentence_generator\r\nModel location gs://gpt2-checkpoint/tensorflow-model/\r\nCreation time Jan 16, 2021, 3:51:39 PM\r\nLast use time\r\nPython version 3.7\r\nRuntime version 2.2\r\nCustom code and dependencies gs://gpt2-checkpoint/staging-dist/generator_package-0.6.tar.gz\r\nPrediction class generator_class_tf.GeneratorClass\r\nMachine type Single core CPU\r\nAuto scaling minimum nodes 1\r\n```\r\nBelow is my ``setup.py`` file:\r\n\r\n```python\r\nfrom setuptools import setup\r\n\r\n\r\nsetup(\r\n name=\"generator_package\",\r\n version=\"0.6\",\r\n include_package_data=True,\r\n scripts=[\"generator_class_tf.py\"],\r\n install_requires=['transformers==2.8.0']\r\n)\r\n```\r\n",
"Yes, this is what I meant. I see that you are using an old version of transformers, can you update to the last release please.",
"> Yes, this is what I meant. I see that you are using an old version of transformers, can you update to the last release please.\r\n\r\nWith the latest version of transformers I get this error:\r\n\r\n> Create Version failed. Bad model detected with error: \"Failed to load model: Unexpected error when loading the model: problem in generator_class_tf - DistributionNotFound: The 'tqdm>=4.27' distribution was not found and is required by this application, \\nTry: pip install transformers -U or pip install -e '.[dev]' if you're working with git master (Error code: 0)\"\r\n\r\nAnd to the best of my knowledge, I dont think we can ``pip install`` anything with Google cloud prediction environment.\r\n\r\n",
"You can just replace your `setup.py` file with\r\n```\r\nfrom setuptools import setup\r\n\r\n\r\nsetup(\r\n name=\"generator_package\",\r\n version=\"0.6\",\r\n include_package_data=True,\r\n scripts=[\"generator_class_tf.py\"],\r\n install_requires=['transformers==4.2.1']\r\n)\r\n```",
"> You can just replace your `setup.py` file with\r\n> \r\n> ```\r\n> from setuptools import setup\r\n> \r\n> \r\n> setup(\r\n> name=\"generator_package\",\r\n> version=\"0.6\",\r\n> include_package_data=True,\r\n> scripts=[\"generator_class_tf.py\"],\r\n> install_requires=['transformers==4.2.1']\r\n> )\r\n> ```\r\n\r\nThanks but this is how I had my ``setup.py`` when I got the error above relating to ``tqdm``. ",
"Ok, then did you try:\r\n```\r\nfrom setuptools import setup\r\n\r\n\r\nsetup(\r\n name=\"generator_package\",\r\n version=\"0.6\",\r\n include_package_data=True,\r\n scripts=[\"generator_class_tf.py\"],\r\n install_requires=['transformers==4.2.1', 'tqdm>=4.27']\r\n)\r\n```",
"Yes, I have. still get the same error :(\r\n\r\n> Create Version failed. Bad model detected with error: \"Failed to load model: Unexpected error when loading the model: problem in generator_class_tf - DistributionNotFound: The 'tqdm>=4.27' distribution was not found and is required by this application, \\nTry: pip install transformers -U or pip install -e '.[dev]' if you're working with git master (Error code: 0)\"",
"According to this page, it should work, https://cloud.google.com/ai-platform/training/docs/packaging-trainer\r\n\r\nSo the problem might come from somewhere else. I suppose you can run your model as expected locally?",
"Yes, that's correct. It works without problems on my own machine.\r\n\r\nThis looks to be a problem with GCP. I'll lodge this as a bug on their issue tracker and update here on any progress made.\r\n\r\n",
"I'm seeing the same issue for my deployment, using transformers 4.5.0. But it seems indeed to be a GCP issue.\r\nHave you seen any comments from Google on this @farazk86 ?\r\n\r\nThanks in advance!",
"> I'm seeing the same issue for my deployment, using transformers 4.5.0. But it seems indeed to be a GCP issue.\r\n> Have you seen any comments from Google on this @farazk86 ?\r\n> \r\n> Thanks in advance!\r\n\r\nYes, this is a GCP issue.\r\nUnfortunately, I gave up in the end. As the issue I created on Google issue tracker also did not help, they were asking for me to provide information from methods within transformers library that I was not familiar with or knew about. It was too much of a hassle - I just gave up.",
"Alright, thanks for the quick reply! That is too bad, I will keep trying myself, and let you know if I find a solution.\r\n\r\nJust for my curiosity, did you instead take any alternative approach (than using Custom Prediction Routines) in order to serve a Transformer model on Google?\r\nI read some people had success with using Docker containers on the \"Cloud Run\" API.",
"No, I just gave up on cloud entirely. And you are right, I had also determined that Docker works as other people on stackoverflow had achieved to deploy using Docker. But I had no experience with docker and just moved on to other projects.\r\n\r\nIf you do manage to figure it out, then yes please do let me know even though now the $300 introductory credits are also expired :)",
"I will keep you informed on my progress for sure.\r\n\r\nCould you provide the link to the issue tracker / bug report you submitted with GCP? \r\nIf that is a public page that is.\r\n\r\nThanks in advance!",
"sure, I'll try to find it. ",
"> I will keep you informed on my progress for sure.\r\n> \r\n> Could you provide the link to the issue tracker / bug report you submitted with GCP?\r\n> If that is a public page that is.\r\n> \r\n> Thanks in advance!\r\n\r\nHere are both my submitted issues, based on my multiple tries at making this work: https://issuetracker.google.com/issues/177648341 and https://issuetracker.google.com/issues/178236762",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,610 | 1,621 | 1,621 | NONE | null | Hi,
I am trying to serve a gpt2 model online using Google cloud. But when creating model environment I get the error:
```
Create Version failed. Bad model detected with error: "Failed to load model: Unexpected error when loading the model: in user code:\n\n /tmp/custom_lib/transformers/modeling_tf_gpt2.py:551 call *\n transformer_outputs = self.transformer(inputs, **kwargs)\n /tmp/custom_lib/transformers/modeling_tf_gpt2.py:321 call *\n inputs_embeds = self.wte(input_ids, mode=\"embedding\")\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py:758 __call__ **\n self._maybe_build(inputs)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py:2131 _maybe_build\n self.build(input_shapes)\n /tmp/custom_lib/transformers/modeling_tf_utils.py:1522 build\n \"weight\", shape=[self.vocab_size, self.hidden_size], initializer=get_initializer(self.initializer_range)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py:447 add_weight\n caching_device=caching_device)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/training/tracking/base.py:743 _add_variable_with_custom_getter\n **kwargs_for_getter)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py:141 make_variable\n shape=variable_shape if variable_shape else None)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py:259 __call__\n return cls._variable_v1_call(*args, **kwargs)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py:220 _variable_v1_call\n shape=shape)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py:198 <lambda>\n previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variable_scope.py:2598 default_variable_creator\n shape=shape)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py:263 __call__\n return super(VariableMetaclass, cls).__call__(*args, **kwargs)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/resource_variable_ops.py:1434 __init__\n distribute_strategy=distribute_strategy)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/resource_variable_ops.py:1567 _init_from_args\n initial_value() if init_from_fn else initial_value,\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py:121 <lambda>\n init_val = lambda: initializer(shape, dtype=dtype)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/init_ops_v2.py:445 __call__\n dtype = _assert_float_dtype(dtype)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/init_ops_v2.py:1037 _assert_float_dtype\n raise ValueError(\"Expected floating point type, got %s.\" % dtype)\n\n ValueError: Expected floating point type, got <dtype: 'int32'>.\n (Error code: 0)"
```
I saved the model from a fine-tuned GPT2 model:
```python
from transformers import TFGPT2LMHeadModel, GPT2Tokenizer

tf_model = TFGPT2LMHeadModel.from_pretrained("checkpoint-8000", from_pt=True)
tf_model.save_pretrained("tensorflow-model")
model_class, tokenizer_class = TFGPT2LMHeadModel, GPT2Tokenizer
tokenizer = tokenizer_class.from_pretrained('tensorflow-model')
model = model_class.from_pretrained('tensorflow-model')
```
This model works on my local machine using ``model.generate()``, but I get the error above when creating the model environment on GCP.
I don't know if this is a Google Cloud issue or a transformers issue. However, when looking at the model created by the line
```python
model = model_class.from_pretrained('tensorflow-model')
```
I can see that both ``model.dtype`` and ``model.variable_dtype`` are float32.
Can anyone help me understand why Google Cloud thinks this model expects a ``float32`` input and not ``int32``? Can I change anything in the model to ensure the correct input dtype?
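For what it's worth, a quick local sanity check of the dtypes involved might look like this (a sketch; it assumes the tokenizer can be loaded separately — ``gpt2`` is used here as a stand-in if no tokenizer files were saved alongside the model):

```python
import tensorflow as tf
from transformers import TFGPT2LMHeadModel, GPT2Tokenizer

model = TFGPT2LMHeadModel.from_pretrained("tensorflow-model")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # stand-in tokenizer, an assumption

input_ids = tokenizer.encode("Hello world", return_tensors="tf")
print(input_ids.dtype)  # int32 -- token ids are integers by design
print(model.dtype)      # float32 -- this describes the weights, not the expected input dtype

print(model.generate(input_ids, max_length=20))
```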
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9638/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9637 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9637/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9637/comments | https://api.github.com/repos/huggingface/transformers/issues/9637/events | https://github.com/huggingface/transformers/issues/9637 | 787,548,366 | MDU6SXNzdWU3ODc1NDgzNjY= | 9,637 | XLMRobertaTokenizerFast producing wrong tokenized output | {
"login": "sstojanoska",
"id": 17052700,
"node_id": "MDQ6VXNlcjE3MDUyNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/17052700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sstojanoska",
"html_url": "https://github.com/sstojanoska",
"followers_url": "https://api.github.com/users/sstojanoska/followers",
"following_url": "https://api.github.com/users/sstojanoska/following{/other_user}",
"gists_url": "https://api.github.com/users/sstojanoska/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sstojanoska/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sstojanoska/subscriptions",
"organizations_url": "https://api.github.com/users/sstojanoska/orgs",
"repos_url": "https://api.github.com/users/sstojanoska/repos",
"events_url": "https://api.github.com/users/sstojanoska/events{/privacy}",
"received_events_url": "https://api.github.com/users/sstojanoska/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"There are two different subjects being discussed here:\r\n - The tokenization behavior: how punctuation is tokenized, or how the blank spaces are separated from the next token. This is expected behavior and just describes the way this tokenizer (XLMRoberta) works.\r\n - The offset mappings, which as described here are wrong in some cases. These need to be fixed, and I am going to describe a bit more the problem and how we are going to solve it below.\r\n\r\n### Cause\r\n\r\nThis bug in offset mapping actually affects **all** the fast tokenizers converted from sentencepiece. During the pre-tokenization step, we first split everything on whitespaces (`WhitespaceSplit` pre-tokenizer), and in a second step, we add the `▁` character in front of each word (`Metaspace` pre-tokenizer). This process is accurate in terms of tokenization, but it makes the offset tracking very difficult:\r\n - All the whitespaces get removed, so we won't have any token pointing back to them.\r\n - We add a \"new\" `▁` in front of each word, so these tokens actually point back to the beginning of each word: the first character.\r\n\r\n### How to fix it\r\n\r\nThe initial idea of using the `WhitespaceSplit` in a first step was simply to deduplicate the whitespaces but since it leads to loss of information we'll replace it with the following process:\r\n - Normalization step that replaces groups of whitespaces with a single one, effectively mapping the single whitespace to the group in the original input.\r\n - Pretokenization step: we just keep the `Metaspace` pre-tokenizer.\r\n\r\nIn order to fix this we need to:\r\n1. Update all the `tokenizer.json` files on the hub, and it will be compatible with any version of `transformers` since we introduced these fast tokenizers (3.5.0+).\r\n2. Update all the conversion steps in `transformers` to avoid creating more buggy tokenizers.",
"### List of updated tokenizers:\r\n\r\n- https://huggingface.co/google/pegasus-xsum\r\n- https://huggingface.co/google/reformer-crime-and-punishment\r\n\r\n### These can't be fixed this way:\r\nThe following will need a new version of `transformers` with a bugfix in `tokenizers`. We'll need to find a way to rely on the new `tokenizer.json` version only on versions of `transformers` that include this bugfix, as it would break all the previous ones.\r\n\r\n- https://huggingface.co/albert-base-v1\r\n- https://huggingface.co/albert-base-v2\r\n- https://huggingface.co/albert-large-v1\r\n- https://huggingface.co/albert-large-v2\r\n- https://huggingface.co/albert-xlarge-v1\r\n- https://huggingface.co/albert-xlarge-v2\r\n- https://huggingface.co/albert-xxlarge-v1\r\n- https://huggingface.co/albert-xxlarge-v2\r\n- https://huggingface.co/camembert-base\r\n- https://huggingface.co/facebook/mbart-large-en-ro\r\n- https://huggingface.co/moussaKam/barthez\r\n- https://huggingface.co/moussaKam/barthez-orangesum-title\r\n- https://huggingface.co/moussaKam/mbarthez\r\n- https://huggingface.co/t5-11b\r\n- https://huggingface.co/t5-3b\r\n- https://huggingface.co/t5-base\r\n- https://huggingface.co/t5-large\r\n- https://huggingface.co/t5-small\r\n- https://huggingface.co/xlm-roberta-base\r\n- https://huggingface.co/xlm-roberta-large\r\n- https://huggingface.co/xlm-roberta-large-finetuned-conll02-dutch\r\n- https://huggingface.co/xlm-roberta-large-finetuned-conll02-spanish\r\n- https://huggingface.co/xlm-roberta-large-finetuned-conll03-english\r\n- https://huggingface.co/xlm-roberta-large-finetuned-conll03-german\r\n- https://huggingface.co/xlnet-base-cased\r\n- https://huggingface.co/xlnet-large-cased",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Unstale",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Any update on this one?",
"Bump",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,610 | 1,624 | 1,624 | NONE | null | ## Environment info
- transformers` version: 4.2.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (False)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@mfuntowicz
@stefan-it
## Information
Model I am using is XLM-RoBERTa.
The problem arises when using XLMRobertaTokenizerFast tokenizer.
The task I am working on is token classification. In order to align the labels with the sub-word units I have used the code snippet provided here: https://huggingface.co/transformers/custom_datasets.html [ Fine-tuning with custom datasets/Token Classification with W-NUT Emerging Entities ].
When trying to align the labels with the encodings, it throws: "ValueError: NumPy boolean array indexing assignment cannot assign X input values to the Y output values where the mask is true."
This behavior is due to how punctuation is tokenized. Moreover, a comma ( ' , ' ) gets tokenized into '__' and ',' (having offset values (0,1)). Similar behavior happens with a dot. However, some other punctuation marks produce only one token (e.g. ' : ' -> '__:').
In addition, the offset_mapping value for ':' differs between sentences, resulting in either a (0,0) or a (0,3) tuple. The problem is that padding tokens have an offset tuple with values (0,0), which is excluded from alignment, but in this case I have to preserve the punctuation since it is a POS tagging problem.
## To reproduce
```
print("Token: {} Offset_mapping: {}".format(train_encodings[338].tokens[67], train_encodings[338].offsets[67]))
# Token: ▁... Offset_mapping: (0, 0)
print("Token: {} Offset_mapping: {}".format(train_encodings[20].tokens[2], train_encodings[20].offsets[2]))
# Token: ▁... Offset_mapping: (0, 3)
```
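As a possible workaround sketch (hypothetical, not from the linked tutorial), labels can be aligned through word-level indices instead of character offsets, assuming a recent tokenizers/transformers version where `word_ids()` is exposed on the batch encoding:

```python
def align_labels(encodings, word_labels):
    # word_labels: one list of per-word tags per example
    aligned = []
    for i, labels in enumerate(word_labels):
        word_ids = encodings.word_ids(batch_index=i)  # None for special and padding tokens
        aligned.append(
            [-100 if word_id is None else labels[word_id] for word_id in word_ids]
        )
    return aligned
```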
Moreover, although I fixed this issue by writing my own masks, I found a new issue: the blank space which denotes the start of a word is tokenized as a separate token instead of being merged with the starting sub-token.
## To reproduce
```
tokenizer = XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-base")
model= XLMRobertaForTokenClassification.from_pretrained("xlm-roberta-base")
s = "Je često kritizirao vladu ."
print(tokenizer.tokenize(s))
# output: ['▁Je', '▁često', '▁krit', 'izira', 'o', '▁', 'vlad', 'u', '▁', '.']
```
## Expected behavior
1. Punctuation marks should be tokenized consistently and have offset values that differ from those of padding tokens.
2. The first sub-word token of a word should always include the preceding blank space.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9637/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9636 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9636/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9636/comments | https://api.github.com/repos/huggingface/transformers/issues/9636/events | https://github.com/huggingface/transformers/issues/9636 | 787,475,043 | MDU6SXNzdWU3ODc0NzUwNDM= | 9,636 | key error when use trainer to fine_tuning a dataset | {
"login": "XiaoYang66",
"id": 43234824,
"node_id": "MDQ6VXNlcjQzMjM0ODI0",
"avatar_url": "https://avatars.githubusercontent.com/u/43234824?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XiaoYang66",
"html_url": "https://github.com/XiaoYang66",
"followers_url": "https://api.github.com/users/XiaoYang66/followers",
"following_url": "https://api.github.com/users/XiaoYang66/following{/other_user}",
"gists_url": "https://api.github.com/users/XiaoYang66/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XiaoYang66/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XiaoYang66/subscriptions",
"organizations_url": "https://api.github.com/users/XiaoYang66/orgs",
"repos_url": "https://api.github.com/users/XiaoYang66/repos",
"events_url": "https://api.github.com/users/XiaoYang66/events{/privacy}",
"received_events_url": "https://api.github.com/users/XiaoYang66/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I found this Jesus is caused by this description `Here we have the loss since we passed along labels`(url:https://huggingface.co/transformers/main_classes/output.html).so if the column dataset object do not have label(or if the column which represents label have other name ,like'entailment_judgment').the trainer can not recognize this column .",
"so I add some line like this :\r\n`def change_transformers_dataset_2_right_format(dataset, label_name):\r\n return dataset.map(lambda example: {'label': example[label_name]}, remove_columns=[label_name])`.it works fine.\r\n",
"I found a lot of dataset ,upload by user, the name of the column which represents 'label' have other name!\r\nmaybe it is better to unify a standard either on dataset or on trainer",
"and I can not visit your forum .I do not know why.and this is wired.can you please help me.thanks a lot!",
"The script is not meant to work out of the box on any dataset, it is an example. If the columns are named differently than the usual glue datasets, it's logical you have to change one line.\r\n\r\nPlease do not post the same issues several times.",
"ok, thanks for your reply .and do you know why I can not visit your forum? is there some special setting in you firewall for your forum? @sgugger ",
"I'm not aware of any firewall problem, you're the first user reporting an issue to connect to them, to be honest.",
"I have this same problem. \r\n```py\r\nfrom transformers import TrainingArguments, Trainer\r\nimport numpy as np\r\nimport evaluate\r\n\r\ntraining_args = TrainingArguments(output_dir=\"test_trainer\", evaluation_strategy=\"epoch\")\r\nmetric = evaluate.load(\"accuracy\")\r\n\r\ndef compute_metrics(eval_pred):\r\n logits, labels = eval_pred\r\n predictions = np.argmax(logits, axis=-1)\r\n return metric.compute(predictions=predictions, references=labels)\r\n \r\ndef train(model, train, eval, **kwargs):\r\n print('Training model...')\r\n trainer = Trainer(\r\n model=model,\r\n train_dataset=train, #Dataset to train it with\r\n eval_dataset=eval, #Dataset to test it with\r\n compute_metrics=compute_metrics,\r\n **kwargs\r\n ) \r\n trainer.train()\r\n trainer.save_model('adkai')\r\n print('Trained!')\r\n \r\nmodel.train(True)\r\ntrain(model, {\r\n '#print Hello World':'stdout.write(\"Hello World\\n\")',\r\n '#print hello World':'stdout.write(\"hello World\\n\")',\r\n '# print Hello world':'stdout.write(\"Hello world\\n\")',\r\n '#print hello world':'stdout.write(\"hello world\\n\")',\r\n '#print Hello World!':'stdout.write(\"Hello World!\\n\")',\r\n '# print hello World!':'stdout.write(\"hello World!\\n\")',\r\n '#print goodbye World!':'stdout.write(\"goodbye World!\\n\")',\r\n '# write Hello World':'stdout.write(\"Hello World\\n\")',\r\n '#write hello World':'stdout.write(\"hello World\\n\")',\r\n '# write Hello world':'stdout.write(\"Hello world\\n\")',\r\n '#write hello world':'stdout.write(\"hello world\\n\")',\r\n '# write Hello World!':'stdout.write(\"Hello World!\\n\")',\r\n 'set x = 5\\n#print x':'stdout.write(x, \"\\n\")',\r\n 'set x = \"Go home\"\\n#output x':'stdout.write(x, \"\\n\")',\r\n 'set xyz = \"Hello\"# output xyz':'stdout.write(xyz, \"\\n\")', \r\n 'set Whatever = \"nothing\"\\n#output Whatever':'stdout.write(Whatever, \"\\n\")',\r\n '#output Whatever':'stdout.write(\"Whatever\\n\")',\r\n '':'',\r\n '':''\r\n}, {\r\n '#write Hello world!':'stdout.write(\"Hello world!\\n\")',\r\n '':'',\r\n '# output Hello World!':'stdout.write(\"Hello World!\\n\")',\r\n})\r\n```\r\n(only partial code)\r\n\r\n\r\nPlease help, this is the error\r\n```py\r\nTraceback (most recent call last):\r\n File \"main.py\", line 18, in <module>\r\n train.train(model, {\r\n File \"/home/runner/AdkAI/train.py\", line 23, in train\r\n trainer.train()\r\n File \"/home/runner/AdkAI/venv/lib/python3.8/site-packages/transformers/trainer.py\", line 1500, in train\r\n return inner_training_loop(\r\n File \"/home/runner/AdkAI/venv/lib/python3.8/site-packages/transformers/trainer.py\", line 1716, in _inner_training_loop\r\n for step, inputs in enumerate(epoch_iterator):\r\n File \"/home/runner/AdkAI/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py\", line 681, in __next__\r\n data = self._next_data()\r\n File \"/home/runner/AdkAI/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py\", line 721, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"/home/runner/AdkAI/venv/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py\", line 49, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/home/runner/AdkAI/venv/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py\", line 49, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\nKeyError: 2\r\n```"
] | 1,610 | 1,666 | 1,610 | NONE | null | ## Environment info
- `transformers` version: 4.1.1
- Platform: Linux-3.10.0-693.5.2.el7.x86_64-x86_64-with-centos-7.4.1708-Core
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@sgugger
## Information
Model I am using (Bert, XLNet ...): bert-base-uncased
The problem arises when using:
* the official example scripts: (give details below)
I am fine-tuning a text classification model on dbpedia_14, and I followed this colab: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/text_classification.ipynb#scrollTo=TlqNaB8jIrJW
The task I am working on is:
* an official GLUE/SQUaD task: (give the name)
dataset: dbpedia_14
## To reproduce
Steps to reproduce the behavior:
Error:
```
File "train.py", line 69, in <module>
  trainer.train()
File "/home/pliu3/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/transformers/trainer.py", line 784, in train
  for step, inputs in enumerate(epoch_iterator):
File "/home/pliu3/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
  data = self._next_data()
File "/home/pliu3/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
  data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/pliu3/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
  data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/pliu3/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
  data = [self.dataset[idx] for idx in possibly_batched_index]
KeyError: 2
```
code
```python
from datasets import load_dataset
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import BertForSequenceClassification, BertTokenizerFast, Trainer, TrainingArguments

dataset_name = 'sem_eval_2014_task_1'
num_labels_size = 3
batch_size = 4
model_checkpoint = 'bert-base-uncased'
number_train_epoch = 5
def tokenize(batch):
return tokenizer(batch['premise'], batch['hypothesis'], truncation=True, )
def compute_metrics(pred):
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='micro')
acc = accuracy_score(labels, preds)
return {
'accuracy': acc,
'f1': f1,
'precision': precision,
'recall': recall
}
model = BertForSequenceClassification.from_pretrained(model_checkpoint, num_labels=num_labels_size)
tokenizer = BertTokenizerFast.from_pretrained(model_checkpoint, use_fast=True)
train_dataset = load_dataset(dataset_name, split='train')
test_dataset = load_dataset(dataset_name, split='test')
train_encoded_dataset = train_dataset.map(tokenize, batched=True)
test_encoded_dataset = test_dataset.map(tokenize, batched=True)
args = TrainingArguments(
output_dir='./results',
evaluation_strategy="epoch",
learning_rate=2e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=number_train_epoch,
weight_decay=0.01,
do_predict=True
)
trainer = Trainer(
model=model,
args=args,
compute_metrics=compute_metrics,
train_dataset=train_encoded_dataset,
eval_dataset=test_encoded_dataset,
tokenizer=tokenizer
)
trainer.train()
trainer.evaluate()
```
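Based on the resolution described in the comments of this issue, a hedged workaround sketch: the target column of `sem_eval_2014_task_1` is not named `label` (it is `entailment_judgment` per the discussion), so renaming it before constructing the `Trainer` avoids the `KeyError`.
```python
# Hedged workaround sketch: rename the dataset's target column to "label"
# before it is handed to the Trainer (apply this right after the .map(tokenize) calls).
def rename_label_column(dataset, label_name):
    return dataset.map(
        lambda example: {"label": example[label_name]},
        remove_columns=[label_name],
    )

train_encoded_dataset = rename_label_column(train_encoded_dataset, "entailment_judgment")
test_encoded_dataset = rename_label_column(test_encoded_dataset, "entailment_judgment")
```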
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9636/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9636/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9635 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9635/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9635/comments | https://api.github.com/repos/huggingface/transformers/issues/9635/events | https://github.com/huggingface/transformers/issues/9635 | 787,456,819 | MDU6SXNzdWU3ODc0NTY4MTk= | 9,635 | Weights used for Masked LM predictions | {
"login": "simran-khanuja",
"id": 24687672,
"node_id": "MDQ6VXNlcjI0Njg3Njcy",
"avatar_url": "https://avatars.githubusercontent.com/u/24687672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simran-khanuja",
"html_url": "https://github.com/simran-khanuja",
"followers_url": "https://api.github.com/users/simran-khanuja/followers",
"following_url": "https://api.github.com/users/simran-khanuja/following{/other_user}",
"gists_url": "https://api.github.com/users/simran-khanuja/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simran-khanuja/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simran-khanuja/subscriptions",
"organizations_url": "https://api.github.com/users/simran-khanuja/orgs",
"repos_url": "https://api.github.com/users/simran-khanuja/repos",
"events_url": "https://api.github.com/users/simran-khanuja/events{/privacy}",
"received_events_url": "https://api.github.com/users/simran-khanuja/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The `cls/predictions/decoder` is the linear layer that is used to project the output of the transformer to the vocabulary logits. This layer is *tied* to the input embeddings: it has the same weights.\r\n\r\nI believe the original BERT codebase doesn't have this layer because it re-uses the input embedding layer, instead of instantiating another one. We do this too in the TF implementation.\r\n\r\nYou can, therefore, safely disregard this layer is the implementation you're using uses the input embeddings' weights to project the output of the transformer to the vocabulary logits. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,610 | 1,618 | 1,618 | NONE | null | I wanted to get masked word predictions for a few bert-base models. I am converting the pytorch models to the original bert tf format using [this](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/convert_bert_pytorch_checkpoint_to_original_tf.py) by modifying the code to load BertForPreTraining state_dict. I am unaware of the use of cls/predictions/decoder in the snippet below, to make the masked predictions. The original BERT codebase does not have this layer, hence. Is it used, or can I safely disregard this to obtain predictions?

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9635/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9634 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9634/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9634/comments | https://api.github.com/repos/huggingface/transformers/issues/9634/events | https://github.com/huggingface/transformers/pull/9634 | 787,446,756 | MDExOlB1bGxSZXF1ZXN0NTU2MTYwODAy | 9,634 | Add separated decoder_head_mask for T5 Models | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great, that looks nice! Let's first merge https://github.com/huggingface/transformers/pull/9569 and then rebase this PR so that it passes all tests :-) ",
"Thanks for fixing this!\r\n\r\nI have one note/question: This seems to only apply to self-attention heads, not heads in the cross attention module, right? Is this intentional?",
"@talkhaldi Thank you very much for pointing this out. It seems you're right and this is not intentional by myself. It'll be fixed in another commit.",
"Hey @patrickvonplaten and @LysandreJik. I've added some `FutureWarning` into the code to handle cases when only `head_mask` is passed by a user. Also, I fixed a cross-attention issue noted by @talkhaldi.\r\nI believe, the PR is now ready for review as all the checks have passed after the rebasing."
] | 1,610 | 1,611 | 1,611 | CONTRIBUTOR | null | ### Fix issue #9632
<hr>
This PR separates `head_mask` and `decoder_head_mask` for the T5 models, and thus makes it possible to specify different head masks for the encoder and the decoder.
**Description:**
- Replace the single input argument `head_mask` with a separate pair `head_mask` and `decoder_head_mask` for the T5 models `T5Model`, `T5ForConditionalGeneration`, `TFT5Model` and `TFT5ForConditionalGeneration` (see the usage sketch after this list)
- Slightly change the order of input arguments to follow the convention of first 7 arguments introduced in PR #9569 for BART-based models, i.e. `"input_ids", "attention_mask", "decoder_input_ids", "decoder_attention_mask", "head_mask", "decoder_head_mask", "encoder_outputs"`
- Currently, the updated PyTorch T5 model does not pass `test_forward_signature` in `tests/test_modeling_common.py`. This problem will be resolved once PR #9569 is merged.
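A hedged usage sketch (it assumes a `transformers` build that already contains this PR; the masking pattern is purely illustrative):
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: How are you?", return_tensors="pt")
labels = tokenizer("Wie geht es dir?", return_tensors="pt").input_ids

# One entry per (layer, head): 1.0 keeps a head, 0.0 disables it for this forward pass.
head_mask = torch.ones(model.config.num_layers, model.config.num_heads)
decoder_head_mask = torch.ones(model.config.num_layers, model.config.num_heads)
decoder_head_mask[0, 0] = 0.0  # e.g. silence the first head of the first decoder layer only

outputs = model(
    **inputs,
    labels=labels,
    head_mask=head_mask,                  # applied to the encoder
    decoder_head_mask=decoder_head_mask,  # applied to the decoder
)
print(outputs.loss)
```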
Reviewer: @patrickvonplaten (the code is ready for review) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9634/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9634",
"html_url": "https://github.com/huggingface/transformers/pull/9634",
"diff_url": "https://github.com/huggingface/transformers/pull/9634.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9634.patch",
"merged_at": 1611093025000
} |
https://api.github.com/repos/huggingface/transformers/issues/9633 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9633/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9633/comments | https://api.github.com/repos/huggingface/transformers/issues/9633/events | https://github.com/huggingface/transformers/issues/9633 | 787,442,675 | MDU6SXNzdWU3ODc0NDI2NzU= | 9,633 | Wrong offsets_mapping in T5TokenizerFast | {
"login": "zorikg",
"id": 37661625,
"node_id": "MDQ6VXNlcjM3NjYxNjI1",
"avatar_url": "https://avatars.githubusercontent.com/u/37661625?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zorikg",
"html_url": "https://github.com/zorikg",
"followers_url": "https://api.github.com/users/zorikg/followers",
"following_url": "https://api.github.com/users/zorikg/following{/other_user}",
"gists_url": "https://api.github.com/users/zorikg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zorikg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zorikg/subscriptions",
"organizations_url": "https://api.github.com/users/zorikg/orgs",
"repos_url": "https://api.github.com/users/zorikg/repos",
"events_url": "https://api.github.com/users/zorikg/events{/privacy}",
"received_events_url": "https://api.github.com/users/zorikg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten @n1t0 do you have any advice on this? The T5 tokenizer tokenizes the sentence as follows:\r\n```\r\n['▁This', '▁is', '▁', 'a', '▁test', '▁sentence']\r\n```\r\n\r\nUnfortunately the offset mapping point to both '▁' and 'a' being at `(8, 9)`, as the following suggests:\r\n```\r\n'offset_mapping': [(0, 4), (5, 7), (8, 9), (8, 9), (10, 14), (15, 23), (0, 0)]\r\n ^---- & ^---- here \r\n```\r\n\r\nHow should one map this encoding back to the initial sequence?",
"@patrickvonplaten @n1t0 - did you have a chance to look at this?\r\nThanks!",
"Hi @zorikg! Thank you for reporting this issue. This is related to https://github.com/huggingface/transformers/issues/9637 concerning the offset mappings bug.\r\n\r\nThe fix for this bug is tricky to deploy, but we are working on it, and I expect it to be available in the coming weeks.",
"Thanks @n1t0, I wondered if there have been any progress on this? Any expectation for when the fix will be avail? Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@zorikg Using the last few versions of `transformers`, you can instantiate your tokenizer as follow:\r\n```python\r\ntokenizer = T5TokenizerFast.from_pretrained('google/t5-v1_1-base', from_slow=True)\r\n```\r\n\r\nThis will force the conversion from the slow tokenizer, thus using the fixed version.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I am getting some difference between these 2 tokenizers is this solved?"
] | 1,610 | 1,638 | 1,621 | NONE | null | ## Environment info
- `transformers` version: 4.2.1
- Platform: Linux-4.9.0-14-amd64-x86_64-with-debian-9.13
- Python version: 3.6.10
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help @patrickvonplaten, @mfuntowicz
## Information
Model I am using: T5
## To reproduce
See comments in the code snippet.
```python
from transformers import T5TokenizerFast
def test_offset_mapping():
"""This test fails and therefore we know that there is a bug in offset_mapping mechanism.
We try to tokenize the sentence 'This is a test sentence' and notice two issues:
1. The tokenizer tokenizes it to ['This', 'is', '', 'a', 'test', 'sentence'],
which means that it has a redundant empty string in position 2.
2. The offset mapping maps to ['This', 'is', 'a', 'a', 'test', 'sentence']
replacing the empty string with redundant 'a'.
"""
tokenizer = T5TokenizerFast.from_pretrained('google/t5-v1_1-base')
s = "This is a test sentence"
tokenized = tokenizer(s, return_offsets_mapping=True)
decoded_tokens, tokens_from_offset_mapping = [], []
for token_index, offset_mapping in enumerate(tokenized['offset_mapping']):
decoded_token = tokenizer.decode(tokenized['input_ids'][token_index])
if decoded_token != tokenizer.eos_token:
decoded_tokens.append(decoded_token)
tokens_from_offset_mapping.append(s[offset_mapping[0]:offset_mapping[1]])
error_msg = f"Wrong offset mapping for '{s}'! \n" \
f"Maps to: {tokens_from_offset_mapping}\n" \
f"Instead of: {decoded_tokens}"
assert decoded_tokens == tokens_from_offset_mapping, error_msg
if __name__ == "__main__":
test_offset_mapping()
```
## Expected behavior
```
AssertionError: Wrong offset mapping for 'This is a test sentence'!
Maps to: ['This', 'is', 'a', 'a', 'test', 'sentence']
Instead of: ['This', 'is', '', 'a', 'test', 'sentence']
```
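As pointed out in the comments of this issue, on recent `transformers` releases the fast tokenizer can be rebuilt from the slow (SentencePiece) tokenizer, which produces correct offsets; a minimal sketch:
```python
from transformers import T5TokenizerFast

# Workaround from the comments: force conversion from the slow tokenizer so the
# offset mappings are rebuilt correctly (requires a recent transformers release).
tokenizer = T5TokenizerFast.from_pretrained("google/t5-v1_1-base", from_slow=True)
```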
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9633/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9632 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9632/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9632/comments | https://api.github.com/repos/huggingface/transformers/issues/9632/events | https://github.com/huggingface/transformers/issues/9632 | 787,430,124 | MDU6SXNzdWU3ODc0MzAxMjQ= | 9,632 | Missing argument: decoder_head_mask for T5 | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Solved in #9634."
] | 1,610 | 1,611 | 1,611 | CONTRIBUTOR | null | # 🚀 Feature request
Despite the encoder-decoder architecture of T5, the T5 models currently take a single `head_mask` argument instead of separate `head_mask` and `decoder_head_mask` arguments, as will be the case for BART-based models after PR #9569 is merged.
## Your contribution
I'm going to send a PR soon. (I'll try to prepare this feature both for PyTorch and TensorFlow in two separate PRs.)
## Reviewer
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9632/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9631 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9631/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9631/comments | https://api.github.com/repos/huggingface/transformers/issues/9631/events | https://github.com/huggingface/transformers/issues/9631 | 787,341,357 | MDU6SXNzdWU3ODczNDEzNTc= | 9,631 | ImportError: cannot import name 'Dataset' | {
"login": "nakarin",
"id": 5127052,
"node_id": "MDQ6VXNlcjUxMjcwNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5127052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nakarin",
"html_url": "https://github.com/nakarin",
"followers_url": "https://api.github.com/users/nakarin/followers",
"following_url": "https://api.github.com/users/nakarin/following{/other_user}",
"gists_url": "https://api.github.com/users/nakarin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nakarin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nakarin/subscriptions",
"organizations_url": "https://api.github.com/users/nakarin/orgs",
"repos_url": "https://api.github.com/users/nakarin/repos",
"events_url": "https://api.github.com/users/nakarin/events{/privacy}",
"received_events_url": "https://api.github.com/users/nakarin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"How did you install Transformers and Datasets? Could you post your `pip list` here?",
"> How did you install Transformers and Datasets? Could you post your `pip list` here?\r\n\r\ntransformers (4.2.1)\r\ndatasets (1.2.1)\r\n\r\nSorry, for reply late.",
"Hmmm, I cannot seem to be able to reproduce your issue. When I install transformers and datasets, I can import `Dataset`, and I don't get a crash like you have. \r\n\r\nCan you open a colab notebook that reproduces it?",
"I try to downgrade datasets to version 1.2.0 and import transformers, it seems no problem, then I upgrade datasets to 1.2.1 again and try to use transformers it works like a charm. \r\n\r\n>>> import transformers\r\n>>> import datasets\r\n>>> import simpletransformers\r\n>>> transformers.__version__\r\n'4.2.0'\r\n>>> dtasets.__version__\r\n'1.2.1'\r\n>>>\r\n\r\nThank you for your time.\r\nNakarin",
"Fantastic, great that you got it to work! Closing this for now, feel free to re-open if you face the issue again.",
"This issue still persists even after trying above methods. ",
"ImportError: cannot import name 'DatasetInfo' from 'huggingface_hub.hf_api' \r\nI occured the same issue when I am trying to import keyBERT package, and my `pip list` is as follow:\r\nkeybert : 0.5.0\r\ntransformers : 4.15.0 \r\n",
"Could you try upgrading `huggingface_hub` to the latest version?\r\n\r\n```\r\npip install -U huggingface_hub\r\n```",
"Upgrading both Transformer and huggingface_hub worked for me. <br>\r\n```\r\npip install -U transformers\r\npip install -U huggingface_hub\r\n```\r\n",
"I am facing a similar issue, but it is not fixed by any of the above. Specifically, I am using this space: https://huggingface.co/spaces/ncoop57/cardify/tree/main\r\n\r\nAnd it encounters the following runtime error:\r\n\r\n```\r\n/home/user/.local/lib/python3.8/site-packages/paramiko/transport.py:236: CryptographyDeprecationWarning: Blowfish has been deprecated\r\n \"class\": algorithms.Blowfish,\r\nTraceback (most recent call last):\r\n File \"app.py\", line 5, in <module>\r\n from autocards.autocards import Autocards\r\n File \"/home/user/.local/lib/python3.8/site-packages/autocards/autocards.py\", line 1, in <module>\r\n from autocards.pipelines import qg_pipeline\r\n File \"/home/user/.local/lib/python3.8/site-packages/autocards/pipelines.py\", line 10, in <module>\r\n from transformers import(\r\n File \"/home/user/.local/lib/python3.8/site-packages/transformers/__init__.py\", line 2709, in __getattr__\r\n return super().__getattr__(name)\r\n File \"/home/user/.local/lib/python3.8/site-packages/transformers/file_utils.py\", line 1822, in __getattr__\r\n value = getattr(module, name)\r\n File \"/home/user/.local/lib/python3.8/site-packages/transformers/file_utils.py\", line 1821, in __getattr__\r\n module = self._get_module(self._class_to_module[name])\r\n File \"/home/user/.local/lib/python3.8/site-packages/transformers/models/auto/__init__.py\", line 202, in _get_module\r\n return importlib.import_module(\".\" + module_name, self.__name__)\r\n File \"/usr/local/lib/python3.8/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"/home/user/.local/lib/python3.8/site-packages/transformers/models/auto/modeling_auto.py\", line 221, in <module>\r\n from ..rag.modeling_rag import ( # noqa: F401 - need to import all RagModels to be in globals() function\r\n File \"/home/user/.local/lib/python3.8/site-packages/transformers/models/rag/modeling_rag.py\", line 29, in <module>\r\n from .retrieval_rag import RagRetriever\r\n File \"/home/user/.local/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py\", line 32, in <module>\r\n from datasets import Dataset, load_dataset, load_from_disk\r\n File \"/home/user/.local/lib/python3.8/site-packages/datasets/__init__.py\", line 37, in <module>\r\n from .arrow_dataset import Dataset, concatenate_datasets\r\n File \"/home/user/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 61, in <module>\r\n from .arrow_writer import ArrowWriter, OptimizedTypedSequence\r\n File \"/home/user/.local/lib/python3.8/site-packages/datasets/arrow_writer.py\", line 26, in <module>\r\n from .features import Features, Image, Value\r\n File \"/home/user/.local/lib/python3.8/site-packages/datasets/features/__init__.py\", line 17, in <module>\r\n from .audio import Audio\r\n File \"/home/user/.local/lib/python3.8/site-packages/datasets/features/audio.py\", line 12, in <module>\r\n from ..utils.streaming_download_manager import xopen\r\n File \"/home/user/.local/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py\", line 19, in <module>\r\n from ..filesystems import COMPRESSION_FILESYSTEMS\r\n File \"/home/user/.local/lib/python3.8/site-packages/datasets/filesystems/__init__.py\", line 7, in <module>\r\n from .hffilesystem import HfFileSystem\r\n File \"/home/user/.local/lib/python3.8/site-packages/datasets/filesystems/hffilesystem.py\", line 6, in <module>\r\n from huggingface_hub.hf_api import DatasetInfo\r\nImportError: cannot import name 
'DatasetInfo' from 'huggingface_hub.hf_api' (/home/user/.local/lib/python3.8/site-packages/huggingface_hub/hf_api.py)\r\n```\r\nI am using the following versions:\r\n\r\n`huggingface_hub == 0.6.0` and `transformers == 4.19.1`\r\n\r\nAny help would be greatly appreciated!\r\n\r\n@LysandreJik ",
"Fixed my issue by using `huggingface_hub == 0.5.0`",
"@LysandreJik I also need help right now..\r\n\r\nI have also encountered the error \"ImportError: cannot import name 'DatasetInfo' from 'huggingface_hub.hf_api'\".\r\nAnd no matter what versions of the transformers package and huggingface_hub package I have installed or updated to or degraded to, this error still exists...\r\n\r\nAfter several rounds of uninstalling and reinstalling, the reported error altered from \"ImportError: cannot import name 'DatasetInfo' from 'huggingface_hub.hf_api (C:\\Users\\Admin\\anaconda3\\lib\\site-packages\\huggingface_hub\\hf_api.py)\" to \"ImportError: cannot import name 'model_info' from 'huggingface_hub' (C:\\Users\\Admin\\anaconda3\\lib\\site-packages\\huggingface_hub\\__init__.py)\"...\r\nBelow are the reported error information. I am now using transformers==4.20.1 and huggingface_hub==0.8.1\r\n\r\n> ---------------------------------------------------------------------------\r\nImportError Traceback (most recent call last)\r\n~\\AppData\\Local\\Temp/ipykernel_28460/2252195315.py in <module>\r\n 1 import torch\r\n----> 2 from transformers import AutoTokenizer, AutoModelForSequenceClassification\r\n 3 \r\n 4 checkpoint = r\"C:\\Users\\Admin\\Desktop\\nlp\\bert-tiny\"\r\n 5 tokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n\r\n~\\anaconda3\\lib\\site-packages\\transformers\\__init__.py in __getattr__(self, name)\r\n 2939 Wav2Vec2Config,\r\n 2940 Wav2Vec2CTCTokenizer,\r\n-> 2941 Wav2Vec2FeatureExtractor,\r\n 2942 Wav2Vec2Processor,\r\n 2943 Wav2Vec2Tokenizer,\r\n\r\n~\\anaconda3\\lib\\site-packages\\transformers\\file_utils.py in __getattr__(self, name)\r\n\r\n~\\anaconda3\\lib\\site-packages\\transformers\\file_utils.py in __getattr__(self, name)\r\n\r\n~\\anaconda3\\lib\\site-packages\\transformers\\models\\auto\\__init__.py in _get_module(self, module_name)\r\n 208 MODEL_MAPPING,\r\n 209 MODEL_WITH_LM_HEAD_MAPPING,\r\n--> 210 AutoModel,\r\n 211 AutoModelForAudioClassification,\r\n 212 AutoModelForAudioFrameClassification,\r\n\r\n~\\anaconda3\\lib\\importlib\\__init__.py in import_module(name, package)\r\n 125 break\r\n 126 level += 1\r\n--> 127 return _bootstrap._gcd_import(name[level:], package, level)\r\n 128 \r\n 129 \r\n\r\n~\\anaconda3\\lib\\site-packages\\transformers\\models\\auto\\modeling_auto.py in <module>\r\n 19 \r\n 20 from ...utils import logging\r\n---> 21 from .auto_factory import _BaseAutoModelClass, _LazyAutoMapping, auto_class_update\r\n 22 from .configuration_auto import CONFIG_MAPPING_NAMES\r\n 23 \r\n\r\n~\\anaconda3\\lib\\site-packages\\transformers\\models\\auto\\auto_factory.py in <module>\r\n 18 \r\n 19 from ...configuration_utils import PretrainedConfig\r\n---> 20 from ...dynamic_module_utils import get_class_from_dynamic_module\r\n 21 from ...utils import copy_func, logging\r\n 22 from .configuration_auto import AutoConfig, model_type_to_module_name, replace_list_option_in_docstrings\r\n\r\n~\\anaconda3\\lib\\site-packages\\transformers\\dynamic_module_utils.py in <module>\r\n 23 from typing import Dict, Optional, Union\r\n 24 \r\n---> 25 from huggingface_hub import HfFolder, model_info\r\n 26 \r\n 27 from .utils import (\r\n\r\nImportError: cannot import name 'model_info' from 'huggingface_hub' (C:\\Users\\Admin\\anaconda3\\lib\\site-packages\\huggingface_hub\\__init__.py)"
] | 1,610 | 1,656 | 1,611 | NONE | null | ## Environment info
- `transformers` version: 4.2.1, datasets : 1.2.1
- Platform: Linux AI-LAB 5.3.0-42-generic #34~18.04.1-Ubuntu SMP Fri Feb 28 13:42:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
- Python version: 3.6.8
- PyTorch version (GPU?): 1.7.1
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
Anyone.
## Information
Transformers and datasets were fully installed via pip.
The problem arises when I try to import the library:
`from transformers import AutoTokenizer, AutoModel`
## The error:
```
ImportError Traceback (most recent call last)
<ipython-input-2-c6bea6c01ce9> in <module>
----> 1 from transformers import AutoTokenizer, AutoModel
/usr/local/lib/python3.6/dist-packages/transformers/__init__.py in __getattr__(self, name)
2096 if name == "__version__":
2097 return __version__
-> 2098 return super().__getattr__(name)
2099
2100 sys.modules[__name__] = _LazyModule(__name__, _import_structure)
/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py in __getattr__(self, name)
1463 elif name in self._class_to_module.keys():
1464 module = self._get_module(self._class_to_module[name])
-> 1465 value = getattr(module, name)
1466 else:
1467 raise AttributeError(f"module {self.__name__} has no attribute {name}")
/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py in __getattr__(self, name)
1462 value = self._get_module(name)
1463 elif name in self._class_to_module.keys():
-> 1464 module = self._get_module(self._class_to_module[name])
1465 value = getattr(module, name)
1466 else:
/usr/local/lib/python3.6/dist-packages/transformers/models/auto/__init__.py in _get_module(self, module_name)
158
159 def _get_module(self, module_name: str):
--> 160 return importlib.import_module("." + module_name, self.__name__)
161
162 sys.modules[__name__] = _LazyModule(__name__, _import_structure)
/usr/lib/python3.6/importlib/__init__.py in import_module(name, package)
124 break
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
127
128
/usr/local/lib/python3.6/dist-packages/transformers/models/auto/modeling_auto.py in <module>
152 from ..pegasus.modeling_pegasus import PegasusForConditionalGeneration, PegasusModel
153 from ..prophetnet.modeling_prophetnet import ProphetNetForCausalLM, ProphetNetForConditionalGeneration, ProphetNetModel
--> 154 from ..rag.modeling_rag import ( # noqa: F401 - need to import all RagModels to be in globals() function
155 RagModel,
156 RagSequenceForGeneration,
/usr/local/lib/python3.6/dist-packages/transformers/models/rag/modeling_rag.py in <module>
27 from ...utils import logging
28 from .configuration_rag import RagConfig
---> 29 from .retrieval_rag import RagRetriever
30
31
/usr/local/lib/python3.6/dist-packages/transformers/models/rag/retrieval_rag.py in <module>
37
38 if is_datasets_available():
---> 39 from datasets import Dataset, load_dataset, load_from_disk
40
41 if is_faiss_available():
ImportError: cannot import name 'Dataset'
```
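For reference, the resolution reported in the comments of this issue was environmental: a broken or outdated install of `datasets` (and, for later reporters, `huggingface_hub`). A quick version check along these lines (a sketch; the upgrade command is only a suggestion) usually narrows it down:
```python
# If the import error persists, reinstalling/upgrading the packages
# (e.g. `pip install -U datasets huggingface_hub transformers`) resolved
# the reports in this thread.
import datasets
import transformers

print("transformers:", transformers.__version__)
print("datasets:", datasets.__version__)
```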
Thanks
Nakarin | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9631/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9631/timeline | completed | null | null |