url
stringlengths 62
66
| repository_url
stringclasses 1
value | labels_url
stringlengths 76
80
| comments_url
stringlengths 71
75
| events_url
stringlengths 69
73
| html_url
stringlengths 50
56
| id
int64 377M
2.15B
| node_id
stringlengths 18
32
| number
int64 1
29.2k
| title
stringlengths 1
487
| user
dict | labels
list | state
stringclasses 2
values | locked
bool 2
classes | assignee
dict | assignees
list | comments
sequence | created_at
int64 1.54k
1.71k
| updated_at
int64 1.54k
1.71k
| closed_at
int64 1.54k
1.71k
⌀ | author_association
stringclasses 4
values | active_lock_reason
stringclasses 2
values | body
stringlengths 0
234k
⌀ | reactions
dict | timeline_url
stringlengths 71
75
| state_reason
stringclasses 3
values | draft
bool 2
classes | pull_request
dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/2012 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2012/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2012/comments | https://api.github.com/repos/huggingface/transformers/issues/2012/events | https://github.com/huggingface/transformers/issues/2012 | 530,697,350 | MDU6SXNzdWU1MzA2OTczNTA= | 2,012 | How to output the vectors of the last four layers of BERT_Model. | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"TF or pytorch?\r\n\r\nIf what you want is TF, you can check #1936."
] | 1,575 | 1,575 | 1,575 | NONE | null | E.g
output = [the_last_one_layer_output, second_last_layer_output, ...] | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2012/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/2012/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2011 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2011/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2011/comments | https://api.github.com/repos/huggingface/transformers/issues/2011/events | https://github.com/huggingface/transformers/pull/2011 | 530,695,368 | MDExOlB1bGxSZXF1ZXN0MzQ3MzA1NTMy | 2,011 | typo fix on the docs as per Pytorch v1.1+ | {
"login": "AdityaSoni19031997",
"id": 22738086,
"node_id": "MDQ6VXNlcjIyNzM4MDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/22738086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AdityaSoni19031997",
"html_url": "https://github.com/AdityaSoni19031997",
"followers_url": "https://api.github.com/users/AdityaSoni19031997/followers",
"following_url": "https://api.github.com/users/AdityaSoni19031997/following{/other_user}",
"gists_url": "https://api.github.com/users/AdityaSoni19031997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AdityaSoni19031997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AdityaSoni19031997/subscriptions",
"organizations_url": "https://api.github.com/users/AdityaSoni19031997/orgs",
"repos_url": "https://api.github.com/users/AdityaSoni19031997/repos",
"events_url": "https://api.github.com/users/AdityaSoni19031997/events{/privacy}",
"received_events_url": "https://api.github.com/users/AdityaSoni19031997/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2011?src=pr&el=h1) Report\n> Merging [#2011](https://codecov.io/gh/huggingface/transformers/pull/2011?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b0ee7c7df3d49a819c4d6cef977214bd91f5c075?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2011?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2011 +/- ##\n=======================================\n Coverage 84.05% 84.05% \n=======================================\n Files 105 105 \n Lines 15555 15555 \n=======================================\n Hits 13075 13075 \n Misses 2480 2480\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2011?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2011?src=pr&el=footer). Last update [b0ee7c7...c356290](https://codecov.io/gh/huggingface/transformers/pull/2011?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks!",
"Thanks a lot to the creators (and contributors) of this amazing lib for making our lives easier!"
] | 1,575 | 1,575 | 1,575 | CONTRIBUTOR | null | https://github.com/huggingface/transformers/issues/2010 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2011/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2011",
"html_url": "https://github.com/huggingface/transformers/pull/2011",
"diff_url": "https://github.com/huggingface/transformers/pull/2011.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2011.patch",
"merged_at": 1575545945000
} |
https://api.github.com/repos/huggingface/transformers/issues/2010 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2010/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2010/comments | https://api.github.com/repos/huggingface/transformers/issues/2010/events | https://github.com/huggingface/transformers/issues/2010 | 530,694,983 | MDU6SXNzdWU1MzA2OTQ5ODM= | 2,010 | Changing the docs as per Pytorch v1.1+ | {
"login": "AdityaSoni19031997",
"id": 22738086,
"node_id": "MDQ6VXNlcjIyNzM4MDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/22738086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AdityaSoni19031997",
"html_url": "https://github.com/AdityaSoni19031997",
"followers_url": "https://api.github.com/users/AdityaSoni19031997/followers",
"following_url": "https://api.github.com/users/AdityaSoni19031997/following{/other_user}",
"gists_url": "https://api.github.com/users/AdityaSoni19031997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AdityaSoni19031997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AdityaSoni19031997/subscriptions",
"organizations_url": "https://api.github.com/users/AdityaSoni19031997/orgs",
"repos_url": "https://api.github.com/users/AdityaSoni19031997/repos",
"events_url": "https://api.github.com/users/AdityaSoni19031997/events{/privacy}",
"received_events_url": "https://api.github.com/users/AdityaSoni19031997/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,575 | 1,580 | 1,580 | CONTRIBUTOR | null | ## ❓ Questions & Help
[Docs Link](https://huggingface.co/transformers/migration.html#optimizers-bertadam-openaiadam-are-now-adamw-schedules-are-standard-pytorch-schedules)
```
# From the Docs
### In Transformers, optimizer and schedules are splitted and instantiated like this:
optimizer = AdamW(model.parameters(), lr=lr, correct_bias=False) # To reproduce BertAdam specific behavior set correct_bias=False
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=num_training_steps) # PyTorch scheduler
### and used like this:
for batch in train_data:
loss = model(batch)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm) # Gradient clipping is not in AdamW anymore (so you can use amp without issue)
scheduler.step()
optimizer.step()
```
As per the Pytorch 1.1+,
>Prior to PyTorch 1.1.0, the learning rate scheduler was expected to be called before the optimizer’s update; 1.1.0 changed this behavior in a BC-breaking way. If you use the learning rate scheduler (calling scheduler.step()) before the optimizer’s update (calling optimizer.step()), this will skip the first value of the learning rate schedule. If you are unable to reproduce results after upgrading to PyTorch 1.1.0, please check if you are calling scheduler.step() at the wrong time.
[Pytorch Reference Link](https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate)
Thanks.
PS Not sure if the issue category selected is apt. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2010/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2009 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2009/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2009/comments | https://api.github.com/repos/huggingface/transformers/issues/2009/events | https://github.com/huggingface/transformers/issues/2009 | 530,675,557 | MDU6SXNzdWU1MzA2NzU1NTc= | 2,009 | Reason for using einsum in xlnet? | {
"login": "SungMinCho",
"id": 8216334,
"node_id": "MDQ6VXNlcjgyMTYzMzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8216334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SungMinCho",
"html_url": "https://github.com/SungMinCho",
"followers_url": "https://api.github.com/users/SungMinCho/followers",
"following_url": "https://api.github.com/users/SungMinCho/following{/other_user}",
"gists_url": "https://api.github.com/users/SungMinCho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SungMinCho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SungMinCho/subscriptions",
"organizations_url": "https://api.github.com/users/SungMinCho/orgs",
"repos_url": "https://api.github.com/users/SungMinCho/repos",
"events_url": "https://api.github.com/users/SungMinCho/events{/privacy}",
"received_events_url": "https://api.github.com/users/SungMinCho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,575 | 1,580 | 1,580 | NONE | null | ## ❓ Questions & Help
Hello.
This might be a newbie question, so I apologize in advance.
While reading your implementation of xlnet, I ran into several usages of `torch.einsum` function.
example) `k_head_h = torch.einsum('ibh,hnd->ibnd', cat, self.k) `
After studying the definition of einsum, I came to a conclusion that the above statement is exactly like using a linear layer (without bias) (from dimension h to n*d), and then resizing the output to be ibnd.
So if I'm not wrong, is there any reason to prefer using `torch.einsum` over `nn.Linear`?
Is it related to performance issues?
I ran a simple test, and `nn.Linear` seems to be a bit faster than `torch.einsum`.
I would really appreciate your help.
Thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2009/reactions",
"total_count": 6,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/2009/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2008 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2008/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2008/comments | https://api.github.com/repos/huggingface/transformers/issues/2008/events | https://github.com/huggingface/transformers/issues/2008 | 530,636,971 | MDU6SXNzdWU1MzA2MzY5NzE= | 2,008 | Expand run_lm_finetuning.py to all models | {
"login": "iedmrc",
"id": 13666448,
"node_id": "MDQ6VXNlcjEzNjY2NDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/13666448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iedmrc",
"html_url": "https://github.com/iedmrc",
"followers_url": "https://api.github.com/users/iedmrc/followers",
"following_url": "https://api.github.com/users/iedmrc/following{/other_user}",
"gists_url": "https://api.github.com/users/iedmrc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iedmrc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iedmrc/subscriptions",
"organizations_url": "https://api.github.com/users/iedmrc/orgs",
"repos_url": "https://api.github.com/users/iedmrc/repos",
"events_url": "https://api.github.com/users/iedmrc/events{/privacy}",
"received_events_url": "https://api.github.com/users/iedmrc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Indeed, here are my 2 cents on that:\r\n- ctrl: easy to add (should work out of the box)\r\n- xlm: should also work out of the box (but need to check if the model is an mlm or a clm model to finetune)\r\n- albert: should work out of the box\r\n- transfo-xl: need to take care of history => a little more work\r\n- xlnet: need to take care of history + permutations => quite more work.\r\n\r\nDo you want to give it a try? We don't have that in our short term roadmap until the end of the year. ",
"Okay, I'm gonna try to add `ctrl`, `xlm` and `albert`. Then I'll make pull request in order to discuss on it. \r\n\r\nIsn't there any example of how to train `transfo-xl` and `xlnet`?",
"You have to look at both original repos",
"Out of curiosity, has any progress been made on a pull request for this?",
"+1 for this request, especially `transfo-xl` :)",
"Is this issue addressed with https://github.com/huggingface/transformers/commit/a8e3336a850e856188350a93e67d77c07c85b8af?",
"a8e3336a850e856188350a93e67d77c07c85b8af makes all those models accessible from `run_language_modeling.py`, but does not do anything special for models whose training has peculiarities, like `transfo-xl` or `xlnet`. I'm not familiar with those two so maybe someone else (@patrickvonplaten?) can chime in.",
"As far as I know: \r\n\r\nCurrently the `lun_language_modeling.py` script is not really made to train `transfo-xl` or `xlnet`\r\n\r\nFirst as @thomwolf already said, the `mems` parameter (the \"history\") of the models is not taken care of during training. During training the model \"caches\" past sequences to effectively reuse them afterwards. It's described quite well in Figure 2 in the [Transfo-XL paper](https://arxiv.org/pdf/1901.02860.pdf). This should be rather easy to add though. \r\n\r\nSecond, `XLNet` samples from a permutation mask during training, which is one of the core ideas of the paper, see https://github.com/huggingface/transformers/issues/2822 or Equation 5 in the official [paper](https://arxiv.org/pdf/1906.08237.pdf) This is a very special case for `XLNet` and is not yet implemented in `run_language_modeling.py` (shouldn't be too hard though to implement it since there is only one additional sum per training sample). \r\n\r\nThird, `Transfo-XL` uses adaptive word embeddings and adaptive softmax which also leads to some specialties when training. See also this issue #3310. This should be implemented in the model class itself though. ",
"I'm assuming that `Albert` is fine out of the box. What about `T5`?",
"Is anybody still working on this currently?",
"We are currently working on it. Might still take ~2 weeks.",
"Any update?",
"I'd like to try this (#4739). I'd like to start with XLNet since that's relevant to my work right now.",
"I think you would just need to add a XLNet data collator to this file so that the trainer can be used with XLNet :-) So I would add a new XLNetLanguageModelingCollator here: https://github.com/huggingface/transformers/blob/1b5820a56540a2096daeb43a0cd8247c8c94a719/src/transformers/data/data_collator.py#L76",
"Thanks so much! I'll look into it :)",
"Any progress on XLNet? @shngt",
"Any updates regarding XLNet ?",
"@patrickvonplaten I added the data collator as you suggested - please review :) You also mentioned earlier \"the `mems` parameter (the \"history\") of the models is not taken care of during training\" - has that been taken care of, or does the logic need to be implemented separately?",
"I was looking into the other models requested:\r\n\r\n- CTRL -> CLM, works out of the box, already added comments\r\n- XLM -> can be trained with three different objectives - CLM, MLM and Translation LM, which is a supervised multilingual extension of MLM. The example script does not seem to require any changes (except for maybe a warning somewhere to use the right flag with the right checkpoint?). TLM does require a lot of data-specific preprocessing, but it seems relevant only in light of the multilingual setting. I feel it would be better to incorporate those in a separate `mulitlingual_language_modeling` example script if others would like an end-to-end example of how this would be properly done.\r\n- Albert -> Instead of the random masking in BERT, the authors use a span-based masking system first seen in SpanBERT (section 3.1 of https://arxiv.org/pdf/1907.10529.pdf). It seems to be a mix of what I implemented in XLNet and the masking procedure in BERT, so should be kept in another function in the main `DataCollatorForLanguageModeling` class in my opinion\r\n- TransformerXL -> seems to be CLM with reuse of previous states. I think this functionality has been added, so no additional work should be needed\r\n\r\nIn summary, I think all that needs to be done right now for XLM and TransformerXL is to add a line or two in the starting docstring mentioning which type of LM to use. For Albert, I think we need to incorporate the masking scheme as a separate procedure in `DataCollatorForLanguageModeling`, but am not sure if this is the cleanest way to do it. Let me know what you would like.\r\n\r\n@patrickvonplaten ",
"I agree very much with what you say. For `XLM` and `TransformerXL` the script should work pretty much out of the box, so we would just have to adapt some comments in `examples/language-modeling/run_language_modeling.py`.\r\n\r\nFor Albert, it would be nice to create a new `SpanMaskLanguageModeling` Data collator.",
"Great, I'll get started then. I'll try to finish it over the weekend :)",
"Awesome, no rush though ;-)",
"Maybe a stupid question, but where should I find `run_lm_finetuning.py`? [The docs](https://huggingface.co/transformers/v2.0.0/examples.html) point to a dead link, as the file doesn't exist in the master branch. ",
"it's renamed and moved [there](https://github.com/huggingface/transformers/tree/master/examples/language-modeling).",
"Thanks for the notice @KristenMoore - The documentation was quite old. The new documentation should have fixed it :-) ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,575 | 1,604 | 1,604 | CONTRIBUTOR | null | ## 🚀 Feature
[run_lm_finetuning.py](https://github.com/huggingface/transformers/blob/b0ee7c7df3d49a819c4d6cef977214bd91f5c075/examples/run_lm_finetuning.py) is a very useful tool for finetuning many models the library provided. But it doesn't cover all the models. Currently available models are:
- gpt2
- openai-gpt
- bert
- roberta
- distilbert
- camembert
And not available ones:
- ctrl
- xlm
- xlnet
- transfo-xl
- albert
## Motivation
Most important part of such a library is that it can be easily finetuned. `run_lm_finetuning.py` gives us that opportunity but why say no more :)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2008/reactions",
"total_count": 10,
"+1": 10,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2008/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2007 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2007/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2007/comments | https://api.github.com/repos/huggingface/transformers/issues/2007/events | https://github.com/huggingface/transformers/pull/2007 | 530,590,304 | MDExOlB1bGxSZXF1ZXN0MzQ3MjMxODQw | 2,007 | fixed XLNet attention output for both attention streams whenever target_mapping is provided | {
"login": "roskoN",
"id": 8143425,
"node_id": "MDQ6VXNlcjgxNDM0MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8143425?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/roskoN",
"html_url": "https://github.com/roskoN",
"followers_url": "https://api.github.com/users/roskoN/followers",
"following_url": "https://api.github.com/users/roskoN/following{/other_user}",
"gists_url": "https://api.github.com/users/roskoN/gists{/gist_id}",
"starred_url": "https://api.github.com/users/roskoN/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/roskoN/subscriptions",
"organizations_url": "https://api.github.com/users/roskoN/orgs",
"repos_url": "https://api.github.com/users/roskoN/repos",
"events_url": "https://api.github.com/users/roskoN/events{/privacy}",
"received_events_url": "https://api.github.com/users/roskoN/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2007?src=pr&el=h1) Report\n> Merging [#2007](https://codecov.io/gh/huggingface/transformers/pull/2007?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b0ee7c7df3d49a819c4d6cef977214bd91f5c075?src=pr&el=desc) will **increase** coverage by `0.03%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2007?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2007 +/- ##\n==========================================\n+ Coverage 84.05% 84.09% +0.03% \n==========================================\n Files 105 105 \n Lines 15555 15570 +15 \n==========================================\n+ Hits 13075 13093 +18 \n+ Misses 2480 2477 -3\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2007?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tests/modeling\\_xlnet\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2007/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3hsbmV0X3Rlc3QucHk=) | `96.42% <100%> (+0.29%)` | :arrow_up: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2007/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `74.22% <100%> (+0.61%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2007?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2007?src=pr&el=footer). Last update [b0ee7c7...76c0bc0](https://codecov.io/gh/huggingface/transformers/pull/2007?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"That's great, thanks for fixing the issue! Looks good to me.",
"Yes, this is great, thanks a lot @roskoN!"
] | 1,575 | 1,575 | 1,575 | CONTRIBUTOR | null | XLNet uses two separate attention streams, i.e. there are two separate tensors for representing the model's attention. Both of them need to have their dimensions permuted.
The problem has been described in #1994 . | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2007/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2007",
"html_url": "https://github.com/huggingface/transformers/pull/2007",
"diff_url": "https://github.com/huggingface/transformers/pull/2007.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2007.patch",
"merged_at": 1575545560000
} |
https://api.github.com/repos/huggingface/transformers/issues/2006 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2006/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2006/comments | https://api.github.com/repos/huggingface/transformers/issues/2006/events | https://github.com/huggingface/transformers/issues/2006 | 530,584,208 | MDU6SXNzdWU1MzA1ODQyMDg= | 2,006 | [ALBERT]: 'AlbertForMaskedLM' object has no attribute 'bias' | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Same issue here. I did slightly different steps, but same result.\r\n\r\n```\r\nmodel = AlbertModel(config=config)\r\nmodel = load_tf_weights_in_albert(model,config,'sample_tf_checkpoint/model.ckpt-100000')\r\n```\r\nThen I get,\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-5-a47f5e7bff26> in <module>\r\n----> 1 model = load_tf_weights_in_albert(model,config,'sample_tf_checkpoint/model.ckpt-100000')\r\n\r\n~/anaconda3/envs/pytorch_py37/lib/python3.7/site-packages/transformers/modeling_albert.py in load_tf_weights_in_albert(model, config, tf_checkpoint_path)\r\n 90 pointer = getattr(pointer, 'weight')\r\n 91 elif l[0] == 'output_bias' or l[0] == 'beta':\r\n---> 92 pointer = getattr(pointer, 'bias')\r\n 93 elif l[0] == 'output_weights':\r\n 94 pointer = getattr(pointer, 'weight')\r\n\r\n~/anaconda3/envs/pytorch_py37/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)\r\n 589 return modules[name]\r\n 590 raise AttributeError(\"'{}' object has no attribute '{}'\".format(\r\n--> 591 type(self).__name__, name))\r\n 592 \r\n 593 def __setattr__(self, name, value):\r\n\r\nAttributeError: 'AlbertModel' object has no attribute 'bias'\r\n```\r\n\r\nMiserably waiting for the solution :( \r\nThe pretrained tensorflow checkpoints were generated using the codes in https://github.com/google-research/google-research/tree/master/albert\r\n\r\nIt seems the latest code update was 3 days ago (Nov. 27). My training was initiated after that.\r\n\r\nPlease help us.",
"Same issue here. ",
"You can Try my repo convert Albert tf to torch .py\n\nOn Mon, Dec 2, 2019 at 11:28 SunYan <[email protected]> wrote:\n\n> Same issue here.\n>\n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/2006?email_source=notifications&email_token=AIEAE4BXOVAOQN7RGG35JHLQWR6GRA5CNFSM4JTGZEWKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEFSCWFA#issuecomment-560212756>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AIEAE4DCFSKTP2GFFB3GYDTQWR6GRANCNFSM4JTGZEWA>\n> .\n>\n",
"> You can Try my repo convert Albert tf to torch .py\r\n> […](#)\r\n> On Mon, Dec 2, 2019 at 11:28 SunYan ***@***.***> wrote: Same issue here. — You are receiving this because you are subscribed to this thread. Reply to this email directly, view it on GitHub <#2006?email_source=notifications&email_token=AIEAE4BXOVAOQN7RGG35JHLQWR6GRA5CNFSM4JTGZEWKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEFSCWFA#issuecomment-560212756>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/AIEAE4DCFSKTP2GFFB3GYDTQWR6GRANCNFSM4JTGZEWA> .\r\n\r\n秀!",
"Hi, this should have been fixed with b3d834a, you can load the changes by installing from source.\r\n\r\nLet me know if you still have an error.",
"@LysandreJik Thank you for your help. I am getting a different error saying that object Embedding doesn't have 'shape'\r\n\r\nIt seems the module is expecting numpy array, while the checkpoint contains object called Embedding, thus has no attribute \"shape\"\r\n\r\nI am not sure how to correct it though.\r\n\r\nThank you again!\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-4-a47f5e7bff26> in <module>\r\n----> 1 model = load_tf_weights_in_albert(model,config,'sample_tf_checkpoint/model.ckpt-100000')\r\n\r\n~/anaconda3/envs/pytorch_py37/lib/python3.7/site-packages/transformers/modeling_albert.py in load_tf_weights_in_albert(model, config, tf_checkpoint_path)\r\n 130 array = np.transpose(array)\r\n 131 try:\r\n--> 132 assert pointer.shape == array.shape\r\n 133 except AssertionError as e:\r\n 134 e.args += (pointer.shape, array.shape)\r\n\r\n~/anaconda3/envs/pytorch_py37/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)\r\n 589 return modules[name]\r\n 590 raise AttributeError(\"'{}' object has no attribute '{}'\".format(\r\n--> 591 type(self).__name__, name))\r\n 592 \r\n 593 def __setattr__(self, name, value):\r\n\r\nAttributeError: 'Embedding' object has no attribute 'shape'\r\n```\r\n",
"Hi @hansaimlim, what is the size of the model you are loading? Could you paste here the 5-10 lines output by the conversion before the error was raised? ",
"I could also reproduce that error:\r\n\r\n```bash\r\nglobal_step\r\nInitialize PyTorch weight ['albert', 'embeddings', 'LayerNorm', 'beta'] from bert/embeddings/LayerNorm/beta\r\nINFO:transformers.modeling_albert:Skipping albert/embeddings/LayerNorm/beta/adam_m\r\nInitialize PyTorch weight ['albert', 'embeddings', 'LayerNorm', 'beta', 'adam_m'] from bert/embeddings/LayerNorm/beta/adam_m\r\nINFO:transformers.modeling_albert:Skipping albert/embeddings/LayerNorm/beta/adam_v\r\nInitialize PyTorch weight ['albert', 'embeddings', 'LayerNorm', 'beta', 'adam_v'] from bert/embeddings/LayerNorm/beta/adam_v\r\nInitialize PyTorch weight ['albert', 'embeddings', 'LayerNorm', 'gamma'] from bert/embeddings/LayerNorm/gamma\r\nINFO:transformers.modeling_albert:Skipping albert/embeddings/LayerNorm/gamma/adam_m\r\nInitialize PyTorch weight ['albert', 'embeddings', 'LayerNorm', 'gamma', 'adam_m'] from bert/embeddings/LayerNorm/gamma/adam_m\r\nINFO:transformers.modeling_albert:Skipping albert/embeddings/LayerNorm/gamma/adam_v\r\nInitialize PyTorch weight ['albert', 'embeddings', 'LayerNorm', 'gamma', 'adam_v'] from bert/embeddings/LayerNorm/gamma/adam_v\r\nInitialize PyTorch weight ['albert', 'embeddings', 'position_embeddings'] from bert/embeddings/position_embeddings\r\nINFO:transformers.modeling_albert:Skipping albert/embeddings/position_embeddings/adam_m\r\nTraceback (most recent call last):\r\n File \"convert_albert_original_tf_checkpoint_to_pytorch.py\", line 66, in <module>\r\n args.pytorch_dump_path)\r\n File \"convert_albert_original_tf_checkpoint_to_pytorch.py\", line 37, in convert_tf_checkpoint_to_pytorch\r\n load_tf_weights_in_albert(model, config, tf_checkpoint_path)\r\n File \"/mnt/transformers/transformers/modeling_albert.py\", line 134, in load_tf_weights_in_albert\r\n assert pointer.shape == array.shape\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 585, in __getattr__\r\n type(self).__name__, name))\r\nAttributeError: 'Embedding' object has no attribute 'shape'\r\n```",
"@LysandreJik Sure. Thanks for prompt feedback!\r\n\r\nmy_albert_config.json\r\n```\r\nattention_probs_dropout_prob:0\r\nhidden_act:\"gelu\"\r\nhidden_dropout_prob:0\r\nembedding_size:128\r\nhidden_size:312\r\ninitializer_range:0.02\r\nintermediate_size:1248\r\nmax_position_embeddings:512\r\nnum_attention_heads:12\r\nnum_hidden_layers:4\r\nnum_hidden_groups:1\r\nnet_structure_type:0\r\ngap_size:0\r\nnum_memory_blocks:0\r\ninner_group_num:1\r\ndown_scale_factor:1\r\ntype_vocab_size:2\r\nln_type:\"postln\"\r\nvocab_size:19686\r\n```\r\n\r\n```\r\nbert/embeddings/LayerNorm/beta\r\nbert/embeddings/LayerNorm/beta/adam_m\r\nbert/embeddings/LayerNorm/beta/adam_v\r\nbert/embeddings/LayerNorm/gamma\r\nbert/embeddings/LayerNorm/gamma/adam_m\r\nbert/embeddings/LayerNorm/gamma/adam_v\r\nbert/embeddings/position_embeddings\r\nbert/embeddings/position_embeddings/adam_m\r\nbert/embeddings/position_embeddings/adam_v\r\nbert/embeddings/token_type_embeddings\r\nbert/embeddings/token_type_embeddings/adam_m\r\nbert/embeddings/token_type_embeddings/adam_v\r\nbert/embeddings/word_embeddings\r\nbert/embeddings/word_embeddings/adam_m\r\nbert/embeddings/word_embeddings/adam_v\r\nbert/encoder/embedding_hidden_mapping_in/bias\r\nbert/encoder/embedding_hidden_mapping_in/bias/adam_m\r\nbert/encoder/embedding_hidden_mapping_in/bias/adam_v\r\nbert/encoder/embedding_hidden_mapping_in/kernel\r\nbert/encoder/embedding_hidden_mapping_in/kernel/adam_m\r\nbert/encoder/embedding_hidden_mapping_in/kernel/adam_v\r\nbert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta\r\nbert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta/adam_m\r\nbert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta/adam_v\r\nbert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma\r\nbert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma/adam_m\r\nbert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma/adam_v\r\nbert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta\r\nbert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta/adam_m\r\nbert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta/adam_v\r\nbert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma\r\nbert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma/adam_m\r\nbert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma/adam_v\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_m\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_v\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_m\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_v\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_m\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_v\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_m\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_v\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_m\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_v\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_m\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_v\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_m\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_v\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_m\r\nbert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_v\r\nbert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias\r\nbert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_m\r\nbert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_v\r\nbert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel\r\nbert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_m\r\nbert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_v\r\nbert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias\r\nbert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_m\r\nbert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_v\r\nbert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel\r\nbert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_m\r\nbert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_v\r\nbert/pooler/dense/bias\r\nbert/pooler/dense/bias/adam_m\r\nbert/pooler/dense/bias/adam_v\r\nbert/pooler/dense/kernel\r\nbert/pooler/dense/kernel/adam_m\r\nbert/pooler/dense/kernel/adam_v\r\ncls/predictions/output_bias\r\ncls/predictions/output_bias/adam_m\r\ncls/predictions/output_bias/adam_v\r\ncls/predictions/transform/LayerNorm/beta\r\ncls/predictions/transform/LayerNorm/beta/adam_m\r\ncls/predictions/transform/LayerNorm/beta/adam_v\r\ncls/predictions/transform/LayerNorm/gamma\r\ncls/predictions/transform/LayerNorm/gamma/adam_m\r\ncls/predictions/transform/LayerNorm/gamma/adam_v\r\ncls/predictions/transform/dense/bias\r\ncls/predictions/transform/dense/bias/adam_m\r\ncls/predictions/transform/dense/bias/adam_v\r\ncls/predictions/transform/dense/kernel\r\ncls/predictions/transform/dense/kernel/adam_m\r\ncls/predictions/transform/dense/kernel/adam_v\r\ncls/seq_relationship/output_bias\r\ncls/seq_relationship/output_bias/adam_m\r\ncls/seq_relationship/output_bias/adam_v\r\ncls/seq_relationship/output_weights\r\ncls/seq_relationship/output_weights/adam_m\r\ncls/seq_relationship/output_weights/adam_v\r\nglobal_step\r\nInitialize PyTorch weight ['albert', 'embeddings', 'LayerNorm', 'beta'] from bert/embeddings/LayerNorm/beta\r\nInitialize PyTorch weight ['albert', 'embeddings', 'LayerNorm', 'beta', 'adam_m'] from bert/embeddings/LayerNorm/beta/adam_m\r\nInitialize PyTorch weight ['albert', 'embeddings', 'LayerNorm', 'beta', 'adam_v'] from bert/embeddings/LayerNorm/beta/adam_v\r\nInitialize PyTorch weight ['albert', 'embeddings', 'LayerNorm', 'gamma'] from bert/embeddings/LayerNorm/gamma\r\nInitialize PyTorch weight ['albert', 'embeddings', 'LayerNorm', 'gamma', 'adam_m'] from bert/embeddings/LayerNorm/gamma/adam_m\r\nInitialize PyTorch weight ['albert', 'embeddings', 'LayerNorm', 'gamma', 'adam_v'] from bert/embeddings/LayerNorm/gamma/adam_v\r\nInitialize PyTorch weight ['albert', 'embeddings', 'position_embeddings'] from bert/embeddings/position_embeddings\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-4-a47f5e7bff26> in <module>\r\n----> 1 model = load_tf_weights_in_albert(model,config,'sample_tf_checkpoint/model.ckpt-100000')\r\n\r\n~/anaconda3/envs/pytorch_py37/lib/python3.7/site-packages/transformers/modeling_albert.py in load_tf_weights_in_albert(model, config, tf_checkpoint_path)\r\n 130 array = np.transpose(array)\r\n 131 try:\r\n--> 132 assert pointer.shape == array.shape\r\n 133 except AssertionError as e:\r\n 134 e.args += (pointer.shape, array.shape)\r\n\r\n~/anaconda3/envs/pytorch_py37/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)\r\n 589 return modules[name]\r\n 590 raise AttributeError(\"'{}' object has no attribute '{}'\".format(\r\n--> 591 type(self).__name__, name))\r\n 592 \r\n 593 def __setattr__(self, name, value):\r\n\r\nAttributeError: 'Embedding' object has no attribute 'shape'\r\n```",
"Alright, I see where the issue stems from, I'm patching it and will get back to you soon.",
"Alright, please let me know if e85855f fixed it. I tested it with models saved from `run_pretraning.py` (with `AlbertForMaskedLM` as the host model) and `run_classifier_sp.py` (with `AlbertForSequenceClassifiication`) and both seem to work fine now.\r\n\r\nPlease keep in mind that we have no albert model that can do next sentence prediction so the weights from `cls/seq_relationship` are dropped. ",
"@LysandreJik \r\n\r\nWorks fine!! :)))) Thank you so much! 👍 ",
"Glad I could help!",
"Thanks @LysandreJik ! I can also confirm that the conversion script is working now :+1: ",
"Short update: I used the converted ALBERT model to perform NER. F-score was ~0.1%. I've seen this strange behaviour for v2 ALBERT models but still have no solution for that.\r\n\r\n@hansaimlim have you done some evaluations with your trained model? Would be great to know if this problem also occurs for non-NER tasks!\r\n\r\n",
"@stefan-it I'm working on drug activity prediction. In my case, I used v2 ALBERT as well, and its performance for masked LM was fine, and I haven't done downstream prediction tasks yet. Assuming you're working on human language, I believe our tasks are very different. How was it when you use BERT?",
"I used my trained model for predicting a masked token, and the model always returns `<unk>` (which is not the case for the English v1 and v2 models), so I guess I did something wrong in the pre-training steps... ",
"Dear All,\r\nI still ha ve an issue by converting an albert checkpoint to pytorch binary using this script. Here is the error:\r\n```Traceback (most recent call last):\r\n File \"$WORK/Tools/miniconda3/envs/py309/lib/python3.9/site-packages/transformers/models/albert/convert_albert_original_tf_checkpoint_to_pytorch.py\", line 63, in <module>\r\n convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path, args.albert_config_file, args.pytorch_dump_path)\r\n File \"$WORK/Tools/miniconda3/envs/py309/lib/python3.9/site-packages/transformers/models/albert/convert_albert_original_tf_checkpoint_to_pytorch.py\", line 36, in convert_tf_checkpoint_to_pytorch\r\n load_tf_weights_in_albert(model, config, tf_checkpoint_path)\r\n File \"$WORK/Tools/miniconda3/envs/py309/lib/python3.9/site-packages/transformers/models/albert/modeling_albert.py\", line 163, in load_tf_weights_in_albert\r\n pointer = getattr(pointer, \"bias\")\r\n File \"$WORK/Tools/miniconda3/envs/py309/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1269, in __getattr__\r\n raise AttributeError(\"'{}' object has no attribute '{}'\".format(\r\nAttributeError: 'AlbertEmbeddings' object has no attribute 'bias'\r\n```\r\nAny idea? \r\nUsing python 3.9\r\ntransformers 4.26.1\r\nunder linux (ubuntu)"
] | 1,575 | 1,676 | 1,575 | COLLABORATOR | null | Hi,
I wanted to convert an own trained ALBERT model with the `convert_albert_original_tf_checkpoint_to_pytorch.py` script:
```bash
$ python3 convert_albert_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path /mnt/albert-base-secrect-language-cased/ --albert_config_file /mnt/albert-base-secrect-language-cased/config.json --pytorch_dump_path pytorch_model.bin
```
Unfortunately, the following error message is returned:
```bash
<--snip-->
bert/pooler/dense/bias
bert/pooler/dense/bias/adam_m
bert/pooler/dense/bias/adam_v
bert/pooler/dense/kernel
bert/pooler/dense/kernel/adam_m
bert/pooler/dense/kernel/adam_v
cls/predictions/output_bias
cls/predictions/output_bias/adam_m
cls/predictions/output_bias/adam_v
cls/predictions/transform/LayerNorm/beta
cls/predictions/transform/LayerNorm/beta/adam_m
cls/predictions/transform/LayerNorm/beta/adam_v
cls/predictions/transform/LayerNorm/gamma
cls/predictions/transform/LayerNorm/gamma/adam_m
cls/predictions/transform/LayerNorm/gamma/adam_v
cls/predictions/transform/dense/bias
cls/predictions/transform/dense/bias/adam_m
cls/predictions/transform/dense/bias/adam_v
cls/predictions/transform/dense/kernel
cls/predictions/transform/dense/kernel/adam_m
cls/predictions/transform/dense/kernel/adam_v
cls/seq_relationship/output_bias
cls/seq_relationship/output_bias/adam_m
cls/seq_relationship/output_bias/adam_v
cls/seq_relationship/output_weights
cls/seq_relationship/output_weights/adam_m
cls/seq_relationship/output_weights/adam_v
global_step
INFO:transformers.modeling_albert:Skipping bert/embeddings/attention/LayerNorm/beta
INFO:transformers.modeling_albert:Skipping bert/embeddings/attention/LayerNorm/beta
INFO:transformers.modeling_albert:Skipping bert/embeddings/attention/LayerNorm/beta
INFO:transformers.modeling_albert:Skipping bert/embeddings/attention/LayerNorm/beta
Traceback (most recent call last):
File "convert_albert_original_tf_checkpoint_to_pytorch.py", line 66, in <module>
args.pytorch_dump_path)
File "convert_albert_original_tf_checkpoint_to_pytorch.py", line 37, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_albert(model, config, tf_checkpoint_path)
File "/mnt/transformers/transformers/modeling_albert.py", line 92, in load_tf_weights_in_albert
pointer = getattr(pointer, 'bias')
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 585, in __getattr__
type(self).__name__, name))
AttributeError: 'AlbertForMaskedLM' object has no attribute 'bias'
```
I'm using the latest commit in `google-research` for training the ALBERT model. Configuration is:
```json
{
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"embedding_size": 128,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"num_hidden_groups": 1,
"net_structure_type": 0,
"gap_size": 0,
"num_memory_blocks": 0,
"inner_group_num": 1,
"down_scale_factor": 1,
"type_vocab_size": 2,
"vocab_size": 32000
}
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2006/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2005 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2005/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2005/comments | https://api.github.com/repos/huggingface/transformers/issues/2005/events | https://github.com/huggingface/transformers/issues/2005 | 530,564,691 | MDU6SXNzdWU1MzA1NjQ2OTE= | 2,005 | tf.keras.mixed_precision.experimental.Policy | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sorry, I created this duplicated issue as the previous one. Please delete this one, thank you."
] | 1,575 | 1,575 | 1,575 | COLLABORATOR | null | ## ❓ Questions & Help
I want to use `mixed_precision`, and I found [tf.keras.mixed_precision.experimental.Policy](https://www.tensorflow.org/api_docs/python/tf/keras/mixed_precision/experimental/Policy).
So I put `tf.keras.mixed_precision.experimental.set_policy("mixed_float16")` before `TFBertModel.from_pretrained(pretrained_weights)`. When I run the code, I got the following error:
> InvalidArgumentError: cannot compute AddV2 as input #1(zero-based) was expected to be a half tensor but is a float tensor [Op:AddV2] name: tf_bert_model_1/bert/embeddings/add/
which happened at `ret = model(model.dummy_inputs, training=False) # build the network with dummy inputs`.
I am not sure if I used it correctly. I think `tf.keras.mixed_precision.experimental.set_policy` is supposed to be used before constructing / build the model, as the tf page says `Policies can be passed to the 'dtype' argument of layer constructors, or a global policy can be set with 'tf.keras.mixed_precision.experimental.set_policy'`.
I wonder if I can use AMP with tf based transformer models and how. Thanks.
[error.txt](https://github.com/huggingface/transformers/files/3907032/error.txt)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2005/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2004 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2004/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2004/comments | https://api.github.com/repos/huggingface/transformers/issues/2004/events | https://github.com/huggingface/transformers/issues/2004 | 530,564,118 | MDU6SXNzdWU1MzA1NjQxMTg= | 2,004 | Can we use tf.keras.mixed_precision.experimental.set_policy ? | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"For now we need to use:\r\n\r\n```python\r\ntf.config.optimizer.set_experimental_options({\"auto_mixed_precision\": True})\r\n```\r\n\r\nPlease see [example here](https://github.com/huggingface/transformers/blob/master/examples/run_tf_glue.py).",
"Thanks. I tried it during waiting the answer, and it doesn't speed up the training. I probably can post my model later.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,575 | 1,580 | 1,580 | COLLABORATOR | null | ## ❓ Questions & Help
I want to use `mixed_precision`, and I found [tf.keras.mixed_precision.experimental.Policy](https://www.tensorflow.org/api_docs/python/tf/keras/mixed_precision/experimental/Policy).
So I put `tf.keras.mixed_precision.experimental.set_policy("mixed_float16")` before `TFBertModel.from_pretrained(pretrained_weights)`. When I run the code, I got the following error:
> InvalidArgumentError: cannot compute AddV2 as input #1(zero-based) was expected to be a half tensor but is a float tensor [Op:AddV2] name: tf_bert_model_1/bert/embeddings/add/
which happened at `ret = model(model.dummy_inputs, training=False) # build the network with dummy inputs`.
I am not sure if I used it correctly. I think `tf.keras.mixed_precision.experimental.set_policy` is supposed to be used before constructing / build the model, as the tf page says `Policies can be passed to the 'dtype' argument of layer constructors, or a global policy can be set with 'tf.keras.mixed_precision.experimental.set_policy'`.
I wonder if I can use AMP with tf based transformer models and how. Thanks.
[error.txt](https://github.com/huggingface/transformers/files/3907032/error.txt)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2004/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/2004/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2003 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2003/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2003/comments | https://api.github.com/repos/huggingface/transformers/issues/2003/events | https://github.com/huggingface/transformers/issues/2003 | 530,553,015 | MDU6SXNzdWU1MzA1NTMwMTU= | 2,003 | Where I could find the vocab.json for XLNet | {
"login": "kugwzk",
"id": 15382517,
"node_id": "MDQ6VXNlcjE1MzgyNTE3",
"avatar_url": "https://avatars.githubusercontent.com/u/15382517?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kugwzk",
"html_url": "https://github.com/kugwzk",
"followers_url": "https://api.github.com/users/kugwzk/followers",
"following_url": "https://api.github.com/users/kugwzk/following{/other_user}",
"gists_url": "https://api.github.com/users/kugwzk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kugwzk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kugwzk/subscriptions",
"organizations_url": "https://api.github.com/users/kugwzk/orgs",
"repos_url": "https://api.github.com/users/kugwzk/repos",
"events_url": "https://api.github.com/users/kugwzk/events{/privacy}",
"received_events_url": "https://api.github.com/users/kugwzk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If you want to have `xlnet_config.json`, which is a JSON file which specifies the hyper-parameters of the XLNet model, you can download the .zip file from [here](https://github.com/zihangdai/xlnet/blob/5cd50bc451436e188a8e7fea15358d5a8c916b72/README.md) which contains the pre-trained weights of XLNet model.\r\n\r\n> ## Questions & Help\r\n> I notice in tokenizer_xlnet.py there is not the vocab,json only spiece model. So I want to know where I could find the vocab.json? And what I should rename the file ?",
"I downloaded the pre-trained model, the config file and the sentence piece model, but when I run the code I found the vocab_size = -1. Did I miss something?",
"An example of the content of `xlnet_config.json` is the following:\r\n```\r\n{\r\n \"d_head\": 64, \r\n \"d_inner\": 3072, \r\n \"d_model\": 768, \r\n \"ff_activation\": \"gelu\", \r\n \"n_head\": 12, \r\n \"n_layer\": 12, \r\n \"n_token\": 32000, \r\n \"untie_r\": true\r\n}\r\n```\r\n\r\n> When i run the code I found the vocab_size=-1\r\n\r\nWhich code you are talking about?\r\n",
"You can either download from the S3 repo or the script is supposed to automatically download the vocab file. Make sure you have a working internet connection."
] | 1,575 | 1,575 | 1,575 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I notice in tokenizer_xlnet.py there is not the vocab,json only spiece model. So I want to know where I could find the vocab.json? And what I should rename the file ? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2003/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2002 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2002/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2002/comments | https://api.github.com/repos/huggingface/transformers/issues/2002/events | https://github.com/huggingface/transformers/pull/2002 | 530,512,950 | MDExOlB1bGxSZXF1ZXN0MzQ3MTc4NDY3 | 2,002 | Always use SequentialSampler during evaluation | {
"login": "ethanjperez",
"id": 6402205,
"node_id": "MDQ6VXNlcjY0MDIyMDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6402205?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ethanjperez",
"html_url": "https://github.com/ethanjperez",
"followers_url": "https://api.github.com/users/ethanjperez/followers",
"following_url": "https://api.github.com/users/ethanjperez/following{/other_user}",
"gists_url": "https://api.github.com/users/ethanjperez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ethanjperez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ethanjperez/subscriptions",
"organizations_url": "https://api.github.com/users/ethanjperez/orgs",
"repos_url": "https://api.github.com/users/ethanjperez/repos",
"events_url": "https://api.github.com/users/ethanjperez/events{/privacy}",
"received_events_url": "https://api.github.com/users/ethanjperez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2002?src=pr&el=h1) Report\n> Merging [#2002](https://codecov.io/gh/huggingface/transformers/pull/2002?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b0ee7c7df3d49a819c4d6cef977214bd91f5c075?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2002?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2002 +/- ##\n=======================================\n Coverage 84.05% 84.05% \n=======================================\n Files 105 105 \n Lines 15555 15555 \n=======================================\n Hits 13075 13075 \n Misses 2480 2480\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2002?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2002?src=pr&el=footer). Last update [b0ee7c7...508f939](https://codecov.io/gh/huggingface/transformers/pull/2002?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Yes, this works! Thank you @ethanjperez."
] | 1,575 | 1,575 | 1,575 | CONTRIBUTOR | null | When evaluating, shouldn't we always use the SequentialSampler instead of DistributedSampler? Evaluation only runs on 1 GPU no matter what, so if you use the DistributedSampler with N GPUs, I think you'll only evaluate on 1/N of the evaluation set. That's at least what I'm finding when I run an older/modified version of this repo. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2002/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2002",
"html_url": "https://github.com/huggingface/transformers/pull/2002",
"diff_url": "https://github.com/huggingface/transformers/pull/2002.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2002.patch",
"merged_at": 1575386140000
} |
https://api.github.com/repos/huggingface/transformers/issues/2001 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2001/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2001/comments | https://api.github.com/repos/huggingface/transformers/issues/2001/events | https://github.com/huggingface/transformers/issues/2001 | 530,480,780 | MDU6SXNzdWU1MzA0ODA3ODA= | 2,001 | GPT2: how to construct batch for Language Modeling | {
"login": "cbaziotis",
"id": 5629093,
"node_id": "MDQ6VXNlcjU2MjkwOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5629093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cbaziotis",
"html_url": "https://github.com/cbaziotis",
"followers_url": "https://api.github.com/users/cbaziotis/followers",
"following_url": "https://api.github.com/users/cbaziotis/following{/other_user}",
"gists_url": "https://api.github.com/users/cbaziotis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cbaziotis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cbaziotis/subscriptions",
"organizations_url": "https://api.github.com/users/cbaziotis/orgs",
"repos_url": "https://api.github.com/users/cbaziotis/repos",
"events_url": "https://api.github.com/users/cbaziotis/events{/privacy}",
"received_events_url": "https://api.github.com/users/cbaziotis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"> I am a little confused about how to prepare input bathces for GPT2LMHeadModel. I want to use GPT2 as an LM. For instance, I want to generate probability distributions over the vocabulary at each timestep, as well as computing the perplexities of sentences. It is important to note that I am working with sentences and not documents, so I will have to pad the inputs in the batch.\r\n> \r\n> ```python\r\n> from transformers import GPT2Tokenizer, GPT2LMHeadModel\r\n> \r\n> # Prepare model\r\n> tokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\n> model = GPT2LMHeadModel.from_pretrained('gpt2')\r\n> model.eval()\r\n> model.to('cuda')\r\n> \r\n> # input sentences\r\n> batch = ['this is a sentence.',\r\n> 'this is another sentence.',\r\n> 'this is another even longer sentence.']\r\n> ```\r\n> \r\n> ## Question 1: Special tokens\r\n> a) Do I have to add a bos token id on my own or is it handled internally by GPT2Tokenizer? Same for the eos token.\r\n> \r\n> ```\r\n> # tokenize\r\n> tokens = [tokenizer.encode(x) for x in batch]\r\n> \r\n> # add BOS and EOS\r\n> tokens = [[tokenizer.bos_token_id] + x + [tokenizer.eos_token_id] for x in tokens]\r\n> ```\r\n> \r\n> ```\r\n> [[50256, 428, 318, 257, 6827, 13, 50256],\r\n> [50256, 428, 318, 1194, 6827, 13, 50256],\r\n> [50256, 428, 318, 1194, 772, 2392, 6827, 13, 50256]]\r\n> ```\r\n> \r\n> b) `tokenizer.encode(x)` gives me a warning \"This tokenizer does not make use of special tokens. Input is returned with no modification.\" I replaced it with `tokenizer.convert_tokens_to_ids(tokenizer.tokenize(x, add_prefix_space=True))` and the warning went away, but I am not sure what is the difference. Which tokenization should I use?\r\n> \r\n> c) By looking at the properties of an instance of GPT2Tokenizer, I see that `bos_token` and `eos_token` are the same. Is this correct?\r\n> \r\n> ## Question 2: Padding\r\n> I want to pad based on the longest sentence in the batch. This is how I usually do it.\r\n> \r\n> ```\r\n> batch = pad_sequence([torch.LongTensor(x) for x in tokens], batch_first=True).to('cuda')\r\n> ```\r\n> \r\n> ```\r\n> tensor([[50256, 428, 318, 257, 6827, 13, 50256, 0, 0],\r\n> [50256, 428, 318, 1194, 6827, 13, 50256, 0, 0],\r\n> [50256, 428, 318, 1194, 772, 2392, 6827, 13, 50256]])\r\n> ```\r\n> \r\n> a) What id does the model expect for the padded tokens? Do I have to pass the token id as an argument to the model or the tokenizer or you have a predifined one?\r\n> \r\n> ## Question 3: Model input\r\n> How is GPT2 made aware of the padded steps? For instance for an RNN I would do something like this:\r\n> \r\n> ```\r\n> lengths = (batch != 0).sum(-1) # tensor([7, 7, 9])\r\n> packed = pack_padded_sequence(x, lengths, batch_first=True)\r\n> out_packed, hn = rnn(packed)\r\n> ```\r\n> \r\n> but for GPT2 I havent found an example. The only ones i found are with batch size 1. So something like this wont work as expected:\r\n> \r\n> ```\r\n> outputs = model(x, labels=x) # labels are shifted inside the model, right?\r\n> loss, logits = outputs[:2]\r\n> ```\r\n> \r\n> # Update\r\n> So there are still things unclear, but from reading other issues this is my current understanding:\r\n> \r\n> * GPT2 has no padding token, as it was trained on documents and not sentences.\r\n> * In order to use GPT2 with variable length inputs, we can apply padding with an arbitrary token and ensure that those tokens are not used by the model with an `attention_mask`.\r\n> * As for the labels, we should replace **only** on the `labels` variable the padded token ids with `-1`.\r\n> So based on that, here is my current toy implementation:\r\n> \r\n> ```python\r\n> inputs = [\r\n> 'this is a sentence.',\r\n> 'this is another sentence.',\r\n> 'this is another even longer sentence.', ]\r\n> \r\n> # tokenize\r\n> # tokens = [tokenizer.encode(x) for x in batch]\r\n> tokens = [tokenizer.convert_tokens_to_ids(\r\n> tokenizer.tokenize(x, add_prefix_space=True))\r\n> for x in inputs]\r\n> \r\n> # add BOS and EOS\r\n> tokens = [[tokenizer.bos_token_id] + x + [tokenizer.eos_token_id]\r\n> for x in tokens]\r\n> \r\n> # padding_value can be whatever...\r\n> inputs = pad_sequence([torch.LongTensor(x) for x in tokens], batch_first=True, padding_value=0).to('cuda')\r\n> # 1 for real tokens and 0 for padded tokens\r\n> mask = (inputs != 0).float()\r\n> # replace the ids of the padded tokens (where token_id==padded_id) with `-1`\r\n> labels = inputs.masked_fill(inputs == 0, -1)\r\n> \r\n> outputs = model(inputs, attention_mask=mask, labels=labels)\r\n> loss, logits = outputs[:2]\r\n> ```\r\n> \r\n> Is this correct??\r\n> \r\n> ### Bug: Padded tokens are not excluded from the loss\r\n> However, I computed the loss on my own and found your implementation does not take into account the padded tokens when averaging, unless I am missing something\r\n> \r\n> ```python\r\n> _logits = logits.view(-1, logits.size(-1)) # flatten logits\r\n> _labels = torch.cat([inputs[:, 1:], inputs[:, :1] * 0], dim=1).view(-1) # shift inputs one position to the left and flatten\r\n> loss_real_avg = F.cross_entropy(_logits, _labels, ignore_index=0, reduction='sum') / mask.sum() # ignore padded timesteps\r\n> loss_naive_avg = F.cross_entropy(_logits, _labels, ignore_index=0, reduction='mean')\r\n> \r\n> print(\"GPT2 loss:\", loss.item())\r\n> print(\"loss_naive_avg:\", loss_naive_avg.item())\r\n> print(\"loss_real_avg:\", loss_real_avg.item())\r\n> ```\r\n> \r\n> ```\r\n> GPT2 loss: 4.664564609527588\r\n> loss_naive_avg: 4.664564609527588\r\n> loss_real_avg: 4.056143283843994\r\n> ```\r\n\r\nWhat you proposed seems a valid walk-around to me. Also, look at #1464, which talked about adding `pad_token` to `tokenizer` and `embedding`. Perhaps that will help as well.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Check out [#3311](https://github.com/huggingface/transformers/issues/3311#issuecomment-601264426). GPT2 doesn't add BOS or EOS token, you must do it manually or use a tokenizer to do so. ",
"> Bug: Padded tokens are not excluded from the loss\r\n> \r\n> However, I computed the loss on my own and found your implementation does not take into account the padded tokens when averaging, unless I am missing something\r\n> ```py\r\n> _logits = logits.view(-1, logits.size(-1)) # flatten logits\r\n> _labels = torch.cat([inputs[:, 1:], inputs[:, :1] * 0], dim=1).view(-1) # shift inputs one position to the left and flatten\r\n> loss_real_avg = F.cross_entropy(_logits, _labels, ignore_index=0, reduction='sum') / mask.sum() # ignore padded timesteps\r\n> loss_naive_avg = F.cross_entropy(_logits, _labels, ignore_index=0, reduction='mean')\r\n> print(\"GPT2 loss:\", loss.item())\r\n> print(\"loss_naive_avg:\", loss_naive_avg.item())\r\n> print(\"loss_real_avg:\", loss_real_avg.item())\r\n> GPT2 loss: 4.664564609527588\r\n> loss_naive_avg: 4.664564609527588\r\n> loss_real_avg: 4.056143283843994\r\n\r\nFor this bug, you may need to set `ignore_index` to -1 instead of 0 in `F.cross_entropy` according to this line:\r\n> ```py\r\n> # replace the ids of the padded tokens (where token_id==padded_id) with `-1`\r\n> labels = inputs.masked_fill(inputs == 0, -1)\r\n> ```"
] | 1,575 | 1,670 | 1,581 | NONE | null | I am a little confused about how to prepare input bathces for GPT2LMHeadModel. I want to use GPT2 as an LM. For instance, I want to generate probability distributions over the vocabulary at each timestep, as well as computing the perplexities of sentences. It is important to note that I am working with sentences and not documents, so I will have to pad the inputs in the batch.
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
# Prepare model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()
model.to('cuda')
# input sentences
batch = ['this is a sentence.',
'this is another sentence.',
'this is another even longer sentence.']
```
## Question 1: Special tokens
a) Do I have to add a bos token id on my own or is it handled internally by GPT2Tokenizer? Same for the eos token.
```
# tokenize
tokens = [tokenizer.encode(x) for x in batch]
# add BOS and EOS
tokens = [[tokenizer.bos_token_id] + x + [tokenizer.eos_token_id] for x in tokens]
```
```
[[50256, 428, 318, 257, 6827, 13, 50256],
[50256, 428, 318, 1194, 6827, 13, 50256],
[50256, 428, 318, 1194, 772, 2392, 6827, 13, 50256]]
```
b) `tokenizer.encode(x)` gives me a warning "This tokenizer does not make use of special tokens. Input is returned with no modification." I replaced it with `tokenizer.convert_tokens_to_ids(tokenizer.tokenize(x, add_prefix_space=True))` and the warning went away, but I am not sure what is the difference. Which tokenization should I use?
c) By looking at the properties of an instance of GPT2Tokenizer, I see that `bos_token` and `eos_token` are the same. Is this correct?
## Question 2: Padding
I want to pad based on the longest sentence in the batch. This is how I usually do it.
```
batch = pad_sequence([torch.LongTensor(x) for x in tokens], batch_first=True).to('cuda')
```
```
tensor([[50256, 428, 318, 257, 6827, 13, 50256, 0, 0],
[50256, 428, 318, 1194, 6827, 13, 50256, 0, 0],
[50256, 428, 318, 1194, 772, 2392, 6827, 13, 50256]])
```
a) What id does the model expect for the padded tokens? Do I have to pass the token id as an argument to the model or the tokenizer or you have a predifined one?
## Question 3: Model input
How is GPT2 made aware of the padded steps? For instance for an RNN I would do something like this:
```
lengths = (batch != 0).sum(-1) # tensor([7, 7, 9])
packed = pack_padded_sequence(x, lengths, batch_first=True)
out_packed, hn = rnn(packed)
```
but for GPT2 I havent found an example. The only ones i found are with batch size 1. So something like this wont work as expected:
```
outputs = model(x, labels=x) # labels are shifted inside the model, right?
loss, logits = outputs[:2]
```
---
# Update
So there are still things unclear, but from reading other issues this is my current understanding:
- GPT2 has no padding token, as it was trained on documents and not sentences.
- In order to use GPT2 with variable length inputs, we can apply padding with an arbitrary token and ensure that those tokens are not used by the model with an `attention_mask`.
- As for the labels, we should replace **only** on the `labels` variable the padded token ids with `-1`.
So based on that, here is my current toy implementation:
```python
inputs = [
'this is a sentence.',
'this is another sentence.',
'this is another even longer sentence.', ]
# tokenize
# tokens = [tokenizer.encode(x) for x in batch]
tokens = [tokenizer.convert_tokens_to_ids(
tokenizer.tokenize(x, add_prefix_space=True))
for x in inputs]
# add BOS and EOS
tokens = [[tokenizer.bos_token_id] + x + [tokenizer.eos_token_id]
for x in tokens]
# padding_value can be whatever...
inputs = pad_sequence([torch.LongTensor(x) for x in tokens], batch_first=True, padding_value=0).to('cuda')
# 1 for real tokens and 0 for padded tokens
mask = (inputs != 0).float()
# replace the ids of the padded tokens (where token_id==padded_id) with `-1`
labels = inputs.masked_fill(inputs == 0, -1)
outputs = model(inputs, attention_mask=mask, labels=labels)
loss, logits = outputs[:2]
```
Is this correct??
### Bug: Padded tokens are not excluded from the loss
However, I computed the loss on my own and found your implementation does not take into account the padded tokens when averaging, unless I am missing something
```python
_logits = logits.view(-1, logits.size(-1)) # flatten logits
_labels = torch.cat([inputs[:, 1:], inputs[:, :1] * 0], dim=1).view(-1) # shift inputs one position to the left and flatten
loss_real_avg = F.cross_entropy(_logits, _labels, ignore_index=0, reduction='sum') / mask.sum() # ignore padded timesteps
loss_naive_avg = F.cross_entropy(_logits, _labels, ignore_index=0, reduction='mean')
print("GPT2 loss:", loss.item())
print("loss_naive_avg:", loss_naive_avg.item())
print("loss_real_avg:", loss_real_avg.item())
```
```
GPT2 loss: 4.664564609527588
loss_naive_avg: 4.664564609527588
loss_real_avg: 4.056143283843994
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2001/reactions",
"total_count": 28,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 23
} | https://api.github.com/repos/huggingface/transformers/issues/2001/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2000 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2000/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2000/comments | https://api.github.com/repos/huggingface/transformers/issues/2000/events | https://github.com/huggingface/transformers/issues/2000 | 530,393,842 | MDU6SXNzdWU1MzAzOTM4NDI= | 2,000 | Wrong tokenization in Transformer-XL documentation | {
"login": "DavidNemeskey",
"id": 690386,
"node_id": "MDQ6VXNlcjY5MDM4Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/690386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DavidNemeskey",
"html_url": "https://github.com/DavidNemeskey",
"followers_url": "https://api.github.com/users/DavidNemeskey/followers",
"following_url": "https://api.github.com/users/DavidNemeskey/following{/other_user}",
"gists_url": "https://api.github.com/users/DavidNemeskey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DavidNemeskey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DavidNemeskey/subscriptions",
"organizations_url": "https://api.github.com/users/DavidNemeskey/orgs",
"repos_url": "https://api.github.com/users/DavidNemeskey/repos",
"events_url": "https://api.github.com/users/DavidNemeskey/events{/privacy}",
"received_events_url": "https://api.github.com/users/DavidNemeskey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, do you want to fix this in a PR?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,575 | 1,582 | 1,582 | CONTRIBUTOR | null | ## 🐛 Bug
This is a documentation-related bug. In the [TransfoXL documentation](https://huggingface.co/transformers/model_doc/transformerxl.html), the tokenization example is wrong. The snippet goes:
```
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
...
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1
```
This code outputs the tokens `[24, 617, 3225, 23, 16072]`, of which `24` is `<unk>`.
The problem comes from the fact that Transformer-XL does **not** use a wordpiece vocabulary, but a regular (whole-word) one. Also, in WT-103, punctuation marks are split from the words. Consequently, the example should read instead (note that space in from of `,`):
```
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
...
input_ids = torch.tensor(tokenizer.encode("Hello , my dog is cute")).unsqueeze(0) # Batch size 1
```
It would also be nice to warn the user about this fact in the documentation, perhaps in `TransfoXLTokenizer`'s docstring? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2000/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2000/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1999 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1999/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1999/comments | https://api.github.com/repos/huggingface/transformers/issues/1999/events | https://github.com/huggingface/transformers/issues/1999 | 530,306,137 | MDU6SXNzdWU1MzAzMDYxMzc= | 1,999 | Training masked language model with Tensorflow | {
"login": "blackcat84",
"id": 25528598,
"node_id": "MDQ6VXNlcjI1NTI4NTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/25528598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blackcat84",
"html_url": "https://github.com/blackcat84",
"followers_url": "https://api.github.com/users/blackcat84/followers",
"following_url": "https://api.github.com/users/blackcat84/following{/other_user}",
"gists_url": "https://api.github.com/users/blackcat84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/blackcat84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/blackcat84/subscriptions",
"organizations_url": "https://api.github.com/users/blackcat84/orgs",
"repos_url": "https://api.github.com/users/blackcat84/repos",
"events_url": "https://api.github.com/users/blackcat84/events{/privacy}",
"received_events_url": "https://api.github.com/users/blackcat84/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"> I've noticed that in the run_lm_finetuning example the model has an additional argument masked_lm_labels\r\n\r\nYes, I have the same issue here. Did you manage to port the example code to TF?\r\n\r\nIn the torch models the argument is interpreted as follows:\r\n\r\n```\r\n if masked_lm_labels is not None:\r\n loss_fct = CrossEntropyLoss(ignore_index=-1) # -1 index = padding token\r\n masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), masked_lm_labels.view(-1))\r\n outputs = (masked_lm_loss,) + outputs\r\n```\r\n\r\nwhich means that one has to define a custom cross-entropy loss in Tensorflow.",
"Unfortunately no, I had a look around in order to implement the custom cross-entropy we are talking about. I switched to Pytorch since it wasn't clear to me whether switching to the custom loss would solve all the problems I had.",
"I see. I guess I will take the same road ;-) At least I can do the finetuning in torch and later convert the model to TF. Thanks for sharing the info!\r\n\r\nBTW I found the implementation of the custom loss that we are talking about in google repo:\r\n\r\n```python\r\n # The `positions` tensor might be zero-padded (if the sequence is too\r\n # short to have the maximum number of predictions). The `label_weights`\r\n # tensor has a value of 1.0 for every real prediction and 0.0 for the\r\n # padding predictions.\r\n per_example_loss = -tf.reduce_sum(log_probs * one_hot_labels, axis=[-1])\r\n numerator = tf.reduce_sum(label_weights * per_example_loss)\r\n denominator = tf.reduce_sum(label_weights) + 1e-5\r\n loss = numerator / denominator\r\n```\r\n\r\nHere is the link to the original code: \r\n\r\nhttps://github.com/google-research/bert/blob/cc7051dc592802f501e8a6f71f8fb3cf9de95dc9/run_pretraining.py#L273-L280\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"While everyone correctly pointed out that you need a loss function which handles masks, the original error message posted here is actually unrelated to that.\r\n\r\n> ```python\r\n> model.compile(optimizer=tf.optimizers.Adam(lr=params['learning_rate']), loss='binary_crossentropy')\r\n> ```\r\n\r\nYour model is compiled with binary crossenropy, e.g. one hot encoded binary labels of shape (batch size x len x len(dict)), while you provide the labels as integers representing the token values (5673 etc) with shape (batchsize x len). This leads to a shape mismatch.\r\nThe error message comes from the comparison of the last values of the shapes len(dict) vs textlen.\r\n\r\n> tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimensions must be equal, but are 10 and 119547 for 'loss/output_1_loss/mul' (op: 'Mul') with input shapes: [?,10], [?,?,119547].\r\n\r\nUsing tf.keras.losses.SparseCategoricalCrossentropy solves the error message, but of course you will still need to implement a masked loss function to use it properly.\r\n",
"Is there anyone who went on with tensorflow? I don't want to switch to pytorch. I will try to implement a masked loss function. If there is anyone already did this, I would be happy to know.",
"I made an attempt on kaggle: https://www.kaggle.com/riblidezso/finetune-xlm-roberta-on-jigsaw-test-data-with-mlm",
"> I made an attempt on kaggle: https://www.kaggle.com/riblidezso/finetune-xlm-roberta-on-jigsaw-test-data-with-mlm\r\n\r\nIt is a very interesting and usefull notebook. Thanks for sharing",
"> I made an attempt on kaggle: https://www.kaggle.com/riblidezso/finetune-xlm-roberta-on-jigsaw-test-data-with-mlm\r\n\r\nSuper useful, thank you!"
] | 1,575 | 1,608 | 1,582 | NONE | null | ## ❓ Questions & Help
I'm trying to fine-tune a masked language model starting from bert-base-multilingual-cased with Tensorflow using the PyTorch-based example _examples/run_lm_finetuning_ as starting point. I'd like to take the multilingual model and adapt it to the Italian language.
Unfortunately I'm unable to find examples over the internet for the TFBertForMaskedLM model in training mode, so I hope this is the appropriate place for this question.
System and libraries
> Platform Linux-5.0.0-36-generic-x86_64-with-debian-buster-sid
> Python 3.7.5 (default, Oct 25 2019, 15:51:11)
> [GCC 7.3.0]
> PyTorch 1.3.1
> Tensorflow 2.0.0
> Transformers 2.2.0
I first convert my train sentences in 4 arrays:
1) train_ids_masked: tokens ids with special tokens and masking + padding up to max_seq_length = 10
2) train_attnmasks: masks for attention (padding masks)
3) train_segments: masks for sentence (constant array since sentences are independent)
4) train_labels: original masked tokens + UNK tokens everywhere else
Every array has shape (num sentences, max_seq_length) = (72,10)
Then I define the model and print the summary
```python
pre_trained_model = 'bert-base-multilingual-cased'
config = transformers.BertConfig.from_pretrained(pre_trained_model)
model = transformers.TFBertForMaskedLM.from_pretrained(pre_trained_model, config=config)
model.compile(optimizer=tf.optimizers.Adam(lr=params['learning_rate']), loss='binary_crossentropy')
print(model.summary())
```
which outputs
```
Model: "tf_bert_for_masked_lm_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
bert (TFBertMainLayer) multiple 177853440
_________________________________________________________________
mlm___cls (TFBertMLMHead) multiple 92920059
=================================================================
Total params: 178,565,115
Trainable params: 178,565,115
Non-trainable params: 0
```
Then I try to train the model
```python
model.fit([train_ids_masked, train_attnmasks, train_segments], train_labels, epochs=1, batch_size=20)
```
The model trains over the first batch but returns the following error
```
Train on 72 samples
20/72 [=======>......................] - ETA: 7sTraceback (most recent call last):
File "/home/andrea/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 1610, in _create_c_op
c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimensions must be equal, but are 10 and 119547 for 'loss/output_1_loss/mul' (op: 'Mul') with input shapes: [?,10], [?,?,119547].
```
when calculating the loss, trying to compare the padding length max_seq_length (= 10) to the vocabulary size (= 119547).
I've also tried to define the model in the following way
```python
inp_ids = tf.keras.layers.Input(shape=(max_seq_length, ), dtype='int32', name="bert_input_ids")
inp_attnmasks = tf.keras.layers.Input(shape=(max_seq_length, ), dtype='int32', name="bert_input_attention_masks")
inp_segments = tf.keras.layers.Input(shape=(max_seq_length, ), dtype='int32', name="bert_input_segment_ids")
inputs = [inp_ids, inp_attnmasks, inp_segments]
outputs = transformers.TFBertForMaskedLM.from_pretrained(pre_trained_model)(inputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer=tf.optimizers.Adam(lr=params['learning_rate']), loss='binary_crossentropy')
```
but I get the same error.
My input and label arrays have the same shape as the ones in the _run_lm_finetuning_ example and my model is simply the Tensorflow equivalent to the model used there.
What am I doing wrong?
Is it possible that this is related to the loss calculation rather than the definition of the model?
I've noticed that in the _run_lm_finetuning_ example the model has an additional argument **masked_lm_labels**
```python
outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)
```
that allows to compute the loss only on masked tokens using PyTorch, but this option is not present in TFBertForMaskedLM, how can I achieve that? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1999/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1998 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1998/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1998/comments | https://api.github.com/repos/huggingface/transformers/issues/1998/events | https://github.com/huggingface/transformers/pull/1998 | 530,299,983 | MDExOlB1bGxSZXF1ZXN0MzQ3MDEwMDQ4 | 1,998 | Added Camembert to available models | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1998?src=pr&el=h1) Report\n> Merging [#1998](https://codecov.io/gh/huggingface/transformers/pull/1998?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1ab8dc44b3d84ed1894f5b6a6fab58fb39298fc7?src=pr&el=desc) will **increase** coverage by `1.28%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1998?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1998 +/- ##\n==========================================\n+ Coverage 82.83% 84.11% +1.28% \n==========================================\n Files 105 105 \n Lines 15545 15545 \n==========================================\n+ Hits 12877 13076 +199 \n+ Misses 2668 2469 -199\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1998?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1998/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `82% <0%> (+1.33%)` | :arrow_up: |\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1998/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `96.47% <0%> (+2.2%)` | :arrow_up: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1998/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `73.61% <0%> (+2.43%)` | :arrow_up: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1998/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `71.76% <0%> (+12.35%)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1998/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `97.08% <0%> (+15.53%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1998/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `95.13% <0%> (+85.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1998?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1998?src=pr&el=footer). Last update [1ab8dc4...a80f3cd](https://codecov.io/gh/huggingface/transformers/pull/1998?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks!"
] | 1,575 | 1,575 | 1,575 | NONE | null | Added Camembert to the available models in the `run_lm_finetuning.py` example. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1998/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1998",
"html_url": "https://github.com/huggingface/transformers/pull/1998",
"diff_url": "https://github.com/huggingface/transformers/pull/1998.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1998.patch",
"merged_at": 1575055023000
} |
https://api.github.com/repos/huggingface/transformers/issues/1997 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1997/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1997/comments | https://api.github.com/repos/huggingface/transformers/issues/1997/events | https://github.com/huggingface/transformers/issues/1997 | 530,291,822 | MDU6SXNzdWU1MzAyOTE4MjI= | 1,997 | How to get a spiece.model from customize chinese vocab.txt in Albert xlnet ? | {
"login": "ciel-zhang",
"id": 18700473,
"node_id": "MDQ6VXNlcjE4NzAwNDcz",
"avatar_url": "https://avatars.githubusercontent.com/u/18700473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ciel-zhang",
"html_url": "https://github.com/ciel-zhang",
"followers_url": "https://api.github.com/users/ciel-zhang/followers",
"following_url": "https://api.github.com/users/ciel-zhang/following{/other_user}",
"gists_url": "https://api.github.com/users/ciel-zhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ciel-zhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ciel-zhang/subscriptions",
"organizations_url": "https://api.github.com/users/ciel-zhang/orgs",
"repos_url": "https://api.github.com/users/ciel-zhang/repos",
"events_url": "https://api.github.com/users/ciel-zhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/ciel-zhang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Have you taken a look at [sentencepiece](https://github.com/google/sentencepiece)?",
"请问这个问题您解决了吗",
"I have the same problem. Have you solved it?",
"> Have you taken a look at [sentencepiece](https://github.com/google/sentencepiece)?\r\n\r\nI have taken a look at sentencepiece documents, but found nothing to build a spiece.model from customized chinese vocab.txt in ALBERT. Do you have any solution to solve this problem?",
"I think that the chinese version of albert uses wordpiece model instead of sentencepiece model.\r\n[https://github.com/google-research/ALBERT/issues/58](url)\r\n\r\n> For Chinese models, we use word piece model provided by Jacob as sentence piece get worse performance on reading comprehension tasks for Chinese.\r\n\r\nhttps://github.com/google-research/ALBERT/blob/master/tokenization.py\r\n```python\r\nclass FullTokenizer(object):\r\n \"\"\"Runs end-to-end tokenziation.\"\"\"\r\n\r\n def __init__(self, vocab_file, do_lower_case=True, spm_model_file=None):\r\n self.vocab = None\r\n self.sp_model = None\r\n if spm_model_file:\r\n self.sp_model = spm.SentencePieceProcessor()\r\n tf.logging.info(\"loading sentence piece model\")\r\n self.sp_model.Load(spm_model_file)\r\n # Note(mingdachen): For the purpose of consisent API, we are\r\n # generating a vocabulary for the sentence piece tokenizer.\r\n self.vocab = {self.sp_model.IdToPiece(i): i for i\r\n in range(self.sp_model.GetPieceSize())}\r\n else:\r\n self.vocab = load_vocab(vocab_file)\r\n self.basic_tokenizer = BasicTokenizer(do_lower_case=do_lower_case)\r\n self.wordpiece_tokenizer = WordpieceTokenizer(vocab=self.vocab)\r\n self.inv_vocab = {v: k for k, v in self.vocab.items()}\r\n```\r\nWhen the sentencepiece model is None, the full tokenizer is initialized with a basic tokenizer and a workpiece tokenizer.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,575 | 1,590 | 1,590 | NONE | null | ## ❓ Questions & Help
How to get a spiece.model from customize chinese vocab.txt in Albert xlnet ? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1997/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1996 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1996/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1996/comments | https://api.github.com/repos/huggingface/transformers/issues/1996/events | https://github.com/huggingface/transformers/issues/1996 | 530,284,664 | MDU6SXNzdWU1MzAyODQ2NjQ= | 1,996 | ALBERT is missing from AutoClasses | {
"login": "eladsegal",
"id": 13485709,
"node_id": "MDQ6VXNlcjEzNDg1NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eladsegal",
"html_url": "https://github.com/eladsegal",
"followers_url": "https://api.github.com/users/eladsegal/followers",
"following_url": "https://api.github.com/users/eladsegal/following{/other_user}",
"gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions",
"organizations_url": "https://api.github.com/users/eladsegal/orgs",
"repos_url": "https://api.github.com/users/eladsegal/repos",
"events_url": "https://api.github.com/users/eladsegal/events{/privacy}",
"received_events_url": "https://api.github.com/users/eladsegal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,575 | 1,575 | 1,575 | CONTRIBUTOR | null | Pull request to fix this: https://github.com/huggingface/transformers/pull/1995
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1996/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1995 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1995/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1995/comments | https://api.github.com/repos/huggingface/transformers/issues/1995/events | https://github.com/huggingface/transformers/pull/1995 | 530,282,755 | MDExOlB1bGxSZXF1ZXN0MzQ2OTk1ODAy | 1,995 | Add ALBERT to AutoClasses | {
"login": "eladsegal",
"id": 13485709,
"node_id": "MDQ6VXNlcjEzNDg1NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eladsegal",
"html_url": "https://github.com/eladsegal",
"followers_url": "https://api.github.com/users/eladsegal/followers",
"following_url": "https://api.github.com/users/eladsegal/following{/other_user}",
"gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions",
"organizations_url": "https://api.github.com/users/eladsegal/orgs",
"repos_url": "https://api.github.com/users/eladsegal/repos",
"events_url": "https://api.github.com/users/eladsegal/events{/privacy}",
"received_events_url": "https://api.github.com/users/eladsegal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1995?src=pr&el=h1) Report\n> Merging [#1995](https://codecov.io/gh/huggingface/transformers/pull/1995?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1ab8dc44b3d84ed1894f5b6a6fab58fb39298fc7?src=pr&el=desc) will **increase** coverage by `1.22%`.\n> The diff coverage is `31.25%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1995?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1995 +/- ##\n==========================================\n+ Coverage 82.83% 84.06% +1.22% \n==========================================\n Files 105 105 \n Lines 15545 15561 +16 \n==========================================\n+ Hits 12877 13081 +204 \n+ Misses 2668 2480 -188\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1995?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1995/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2F1dG8ucHk=) | `30.61% <20%> (-1.21%)` | :arrow_down: |\n| [transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1995/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9hdXRvLnB5) | `45% <33.33%> (-0.95%)` | :arrow_down: |\n| [transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1995/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYXV0by5weQ==) | `60% <66.66%> (+0.54%)` | :arrow_up: |\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1995/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `82% <0%> (+1.33%)` | :arrow_up: |\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1995/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `96.47% <0%> (+2.2%)` | :arrow_up: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1995/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `73.61% <0%> (+2.43%)` | :arrow_up: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1995/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `71.76% <0%> (+12.35%)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1995/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `97.08% <0%> (+15.53%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1995/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `95.13% <0%> (+85.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1995?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1995?src=pr&el=footer). Last update [1ab8dc4...a415156](https://codecov.io/gh/huggingface/transformers/pull/1995?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thank you, that's great!"
] | 1,575 | 1,575 | 1,575 | CONTRIBUTOR | null | Adds ALBERT to AutoClasses and also fixes some documentation mistakes along the way | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1995/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1995",
"html_url": "https://github.com/huggingface/transformers/pull/1995",
"diff_url": "https://github.com/huggingface/transformers/pull/1995.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1995.patch",
"merged_at": 1575044738000
} |
https://api.github.com/repos/huggingface/transformers/issues/1994 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1994/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1994/comments | https://api.github.com/repos/huggingface/transformers/issues/1994/events | https://github.com/huggingface/transformers/issues/1994 | 530,276,139 | MDU6SXNzdWU1MzAyNzYxMzk= | 1,994 | XLnet output_attentions=True raises an exception | {
"login": "roskoN",
"id": 8143425,
"node_id": "MDQ6VXNlcjgxNDM0MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8143425?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/roskoN",
"html_url": "https://github.com/roskoN",
"followers_url": "https://api.github.com/users/roskoN/followers",
"following_url": "https://api.github.com/users/roskoN/following{/other_user}",
"gists_url": "https://api.github.com/users/roskoN/gists{/gist_id}",
"starred_url": "https://api.github.com/users/roskoN/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/roskoN/subscriptions",
"organizations_url": "https://api.github.com/users/roskoN/orgs",
"repos_url": "https://api.github.com/users/roskoN/repos",
"events_url": "https://api.github.com/users/roskoN/events{/privacy}",
"received_events_url": "https://api.github.com/users/roskoN/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The issue is fixed in #2007 ."
] | 1,575 | 1,575 | 1,575 | CONTRIBUTOR | null | ## 🐛 Bug
I am working on conditional sentences probabilities based on [this code](https://github.com/huggingface/transformers/issues/917#issuecomment-525297746) and whenever `output_attentions=True` and `target_mapping` is provided, there is an exception thrown.
Model I am using (Bert, XLNet....): XLNet ('xlnet-base-cased')
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [x] my own modified scripts: Setting `output_attentions=True`, throws an exception: `AttributeError: 'tuple' object has no attribute 'permute'`.
The tasks I am working on is:
* [x] my own task or dataset: Using just some sample text
## To Reproduce
Here is a [Google Colab notebook](https://colab.research.google.com/drive/1fkNB0Aqlhtvo3CcHWQ6IqxmCm2Qt9Etn) where the issue can be reproduced as well. Just run all cells.
**Code:**
```python
# https://github.com/huggingface/transformers/issues/917#issuecomment-525297746
import torch
from transformers import XLNetTokenizer, XLNetLMHeadModel
import numpy as np
from scipy.special import softmax
PADDING_TEXT = """In 1991, the remains of Russian Tsar Nicholas II and his family
(except for Alexei and Maria) are discovered.
The voice of Nicholas's young son, Tsarevich Alexei Nikolaevich, narrates the
remainder of the story. 1883 Western Siberia,
a young Grigori Rasputin is asked by his father and a group of men to perform magic.
Rasputin has a vision and denounces one of the men as a horse thief. Although his
father initially slaps him for making such an accusation, Rasputin watches as the
man is chased outside and beaten. Twenty years later, Rasputin sees a vision of
the Virgin Mary, prompting him to become a priest. Rasputin quickly becomes famous,
with people, even a bishop, begging for his blessing. <eod> """
text = "The dog is very cute."
tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetLMHeadModel.from_pretrained('xlnet-base-cased', output_attentions=True)
tokenize_input = tokenizer.tokenize(PADDING_TEXT + text)
tokenize_text = tokenizer.tokenize(text)
sum_lp = 0.0
for max_word_id in range((len(tokenize_input)-len(tokenize_text)), (len(tokenize_input))):
sent = tokenize_input[:]
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(sent)])
perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float)
perm_mask[:, :, max_word_id:] = 1.0
target_mapping = torch.zeros((1, 1, input_ids.shape[1]), dtype=torch.float)
target_mapping[0, 0, max_word_id] = 1.0
with torch.no_grad():
next_token_logits, attentions = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)
word_id = tokenizer.convert_tokens_to_ids([tokenize_input[max_word_id]])[0]
predicted_prob = softmax(np.array(next_token_logits[0][-1]))
lp = np.log(predicted_prob[word_id])
sum_lp += lp
print("sentence logprob =", sum_lp)
```
**Stacktrace:**
```shell
AttributeError Traceback (most recent call last)
<ipython-input-5-6490f5f4333c> in <module>()
38
39 with torch.no_grad():
---> 40 next_token_logits, attentions = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)
41
42 word_id = tokenizer.convert_tokens_to_ids([tokenize_input[max_word_id]])[0]
4 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_xlnet.py in forward(self, input_ids, attention_mask, mems, perm_mask, target_mapping, token_type_ids, input_mask, head_mask, inputs_embeds, labels)
952 input_mask=input_mask,
953 head_mask=head_mask,
--> 954 inputs_embeds=inputs_embeds)
955
956 logits = self.lm_loss(transformer_outputs[0])
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_xlnet.py in forward(self, input_ids, attention_mask, mems, perm_mask, target_mapping, token_type_ids, input_mask, head_mask, inputs_embeds)
879 outputs = outputs + (hidden_states,)
880 if self.output_attentions:
--> 881 attentions = tuple(t.permute(2, 3, 0, 1).contiguous() for t in attentions)
882 outputs = outputs + (attentions,)
883
/usr/local/lib/python3.6/dist-packages/transformers/modeling_xlnet.py in <genexpr>(.0)
879 outputs = outputs + (hidden_states,)
880 if self.output_attentions:
--> 881 attentions = tuple(t.permute(2, 3, 0, 1).contiguous() for t in attentions)
882 outputs = outputs + (attentions,)
883
AttributeError: 'tuple' object has no attribute 'permute'
```
## Expected behavior
The model should output the logits for each token and the attention values across layers, heads, and tokens.
## Environment
* OS: 18.04.3 LTS (Bionic Beaver)
* Python version: 3.6.8
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): 2.2.0
* Using GPU ? No
* Distributed of parallel setup ? N/A
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1994/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1993 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1993/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1993/comments | https://api.github.com/repos/huggingface/transformers/issues/1993/events | https://github.com/huggingface/transformers/issues/1993 | 530,204,605 | MDU6SXNzdWU1MzAyMDQ2MDU= | 1,993 | Why is the weight of linear layer tied to the input embeddings in OpenAIGPTLMHeadModel? | {
"login": "KaitoHH",
"id": 13927774,
"node_id": "MDQ6VXNlcjEzOTI3Nzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/13927774?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KaitoHH",
"html_url": "https://github.com/KaitoHH",
"followers_url": "https://api.github.com/users/KaitoHH/followers",
"following_url": "https://api.github.com/users/KaitoHH/following{/other_user}",
"gists_url": "https://api.github.com/users/KaitoHH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KaitoHH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KaitoHH/subscriptions",
"organizations_url": "https://api.github.com/users/KaitoHH/orgs",
"repos_url": "https://api.github.com/users/KaitoHH/repos",
"events_url": "https://api.github.com/users/KaitoHH/events{/privacy}",
"received_events_url": "https://api.github.com/users/KaitoHH/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The token embedding matrix and the linear layer of the **language modeling head** are indeed tied. The embedding matrix is used to map the vocabulary to vectors of last dimension `hidden_size`. \r\n\r\nThe linear layer is used to do the exact same thing, just the other way around -> mapping the model output of last dimension `hidden_size` to the vocabulary, so that the output may be converted into vocabulary tokens.",
"First of all, thanks for the replay!\r\n\r\nI know that the last linear layer is to mapping `hidden_state` with the size `hidden_size` to the size of the vocabulary, but the linear layer does not need to output concrete tokens, right? It just needs to output a group of probabilities (with the size of vocabulary) with softmax, and these probabilities seem to have nothing to do with the token embedding matrix?\r\n\r\nI have read some other papers, like the CBOW model in word2vec, which uses a linear layer with separate parameters before softmax to train the language model. As a result, the way that GPT does makes me feel confused.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,575 | 1,581 | 1,581 | NONE | null | ## ❓ Questions & Help
Yes the original GPT paper also uses same `W_e` as both token embedding matrix and linear weight, and seems that many succeeding models like GPT-2, XLNet also use the same matrix. In my perspective, the token embedding matrix and the weight in linear layer have nothing related (though they have both the same shape). Could you please explain a bit of that?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1993/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1992 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1992/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1992/comments | https://api.github.com/repos/huggingface/transformers/issues/1992/events | https://github.com/huggingface/transformers/issues/1992 | 530,196,884 | MDU6SXNzdWU1MzAxOTY4ODQ= | 1,992 | Worse F1 on squad2 with finetune+distil distilroberta-base than just finetune | {
"login": "volker42maru",
"id": 51976664,
"node_id": "MDQ6VXNlcjUxOTc2NjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/51976664?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/volker42maru",
"html_url": "https://github.com/volker42maru",
"followers_url": "https://api.github.com/users/volker42maru/followers",
"following_url": "https://api.github.com/users/volker42maru/following{/other_user}",
"gists_url": "https://api.github.com/users/volker42maru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/volker42maru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/volker42maru/subscriptions",
"organizations_url": "https://api.github.com/users/volker42maru/orgs",
"repos_url": "https://api.github.com/users/volker42maru/repos",
"events_url": "https://api.github.com/users/volker42maru/events{/privacy}",
"received_events_url": "https://api.github.com/users/volker42maru/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"cc @VictorSanh ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,575 | 1,580 | 1,580 | NONE | null | Hi there,
I am trying to finetune distilroberta on squad2. First, I simply used the _distilroberta-base_ model and finetuned it on the squad2 dataset using `run_squad.py`, which gave me **74/71 F1/EM**. It's a lot worse than the roberta-base accuracy.
Currently, I am trying to finetune+distil (from roberta-base squad2 finetuned model) using `run_squad_w_distillation.py`. My roberta-base squad2 finetuned model has around 83/80 F1/EM. However, when I try to finetune+distil _distilroberta-base_ with the finetuned roberta-base as teacher, I only get around **63/60 F1/EM**. Maybe my hyperparams are way off or I need to train longer? Here's my current config:
- learning_rate=3e-5
- total_batch_size=16
- num_train_epochs=2
- max_seq_length=384
I left all other hyperparams as default.
I also checked out some predictions and it seems the model most of the time predicts _no answer_ as the best answer. In cases where it actually predicts an answer, the accuracy is not that bad.
Would be awesome to get some feedback on this, as I am trying to do inference on CPU and a distilled model would greatly benefit me in this case.
Cheers | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1992/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1991 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1991/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1991/comments | https://api.github.com/repos/huggingface/transformers/issues/1991/events | https://github.com/huggingface/transformers/issues/1991 | 530,191,102 | MDU6SXNzdWU1MzAxOTExMDI= | 1,991 | Facing AttributeError: 'DataParallel' object has no attribute 'resize_token_embeddings' | {
"login": "engrussman",
"id": 43364003,
"node_id": "MDQ6VXNlcjQzMzY0MDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/43364003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/engrussman",
"html_url": "https://github.com/engrussman",
"followers_url": "https://api.github.com/users/engrussman/followers",
"following_url": "https://api.github.com/users/engrussman/following{/other_user}",
"gists_url": "https://api.github.com/users/engrussman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/engrussman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/engrussman/subscriptions",
"organizations_url": "https://api.github.com/users/engrussman/orgs",
"repos_url": "https://api.github.com/users/engrussman/repos",
"events_url": "https://api.github.com/users/engrussman/events{/privacy}",
"received_events_url": "https://api.github.com/users/engrussman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I've two GPUs install but I've not passed any argument to utilize both GPUs\r\n\r\n\r\n+-----------------------------------------------------------------------------+\r\n| NVIDIA-SMI 418.87.01 Driver Version: 418.87.01 CUDA Version: 10.1 |\r\n|-------------------------------+----------------------+----------------------+\r\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n|===============================+======================+======================|\r\n| 0 Tesla V100-PCIE... On | 00000000:00:05.0 Off | Off |\r\n| N/A 39C P0 43W / 250W | 4976MiB / 32480MiB | 0% Default |\r\n+-------------------------------+----------------------+----------------------+\r\n| 1 Tesla V100-PCIE... On | 00000000:00:06.0 Off | Off |\r\n| N/A 35C P0 26W / 250W | 11MiB / 32480MiB | 0% Default |\r\n+-------------------------------+----------------------+----------------------+\r\n\r\n+-----------------------------------------------------------------------------+\r\n| Processes: GPU Memory |\r\n| GPU PID Type Process name Usage |\r\n|=============================================================================|\r\n| 0 26820 C python 1653MiB |\r\n| 0 69641 C python 1657MiB |\r\n| 0 114902 C python 1655MiB |\r\n+-----------------------------------------------------------------------------+\r\n",
"I ran into the same issue. It is not fixed in the newest released version, 2.2.1.",
"Same issue when using multi-gpu. Single gpu case works.\r\n\r\ne.g. running the command as `CUDA_VISIBLE_DEVICES=1 python run_lm_finetuning.py ...`",
"As mentioned by @kalpitdixit using Single GPU works fine but on multiple GPUs, problem persists. ",
"Ok should be fixed on master, thanks!"
] | 1,575 | 1,575 | 1,575 | NONE | null | ## 🐛 AttributeError: 'DataParallel' object has no attribute 'resize_token_embeddings'
<!-- Important information -->
I'm facing AttributeError: 'DataParallel' object has no attribute 'resize_token_embeddings' while performing fine-tuning by using run_lm_finetuning.py.
Following are the arguments:
python run_lm_finetuning.py --train_data_file=sample_text.txt --model_type=gpt2 --model_name_or_path=gpt2 --output_dir=op --mlm --do_train --overwrite_output_dir --do_lower_case --save_steps=50
I tried to change the model but faced same error :
## Detailed error message
Traceback (most recent call last):
File "run_lm_finetuning.py", line 556, in <module>
main()
File "run_lm_finetuning.py", line 508, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_lm_finetuning.py", line 218, in train
model.resize_token_embeddings(len(tokenizer))
File "/nipa/anaconda3/envs/nlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 585, in __getattr__
type(self).__name__, name))
AttributeError: 'DataParallel' object has no attribute 'resize_token_embeddings'
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1991/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1990 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1990/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1990/comments | https://api.github.com/repos/huggingface/transformers/issues/1990/events | https://github.com/huggingface/transformers/issues/1990 | 530,187,794 | MDU6SXNzdWU1MzAxODc3OTQ= | 1,990 | When training QA models, albert-xxlarge-v2 uses much more GPU mem than Bert-large | {
"login": "fatmelon",
"id": 9691826,
"node_id": "MDQ6VXNlcjk2OTE4MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9691826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fatmelon",
"html_url": "https://github.com/fatmelon",
"followers_url": "https://api.github.com/users/fatmelon/followers",
"following_url": "https://api.github.com/users/fatmelon/following{/other_user}",
"gists_url": "https://api.github.com/users/fatmelon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fatmelon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fatmelon/subscriptions",
"organizations_url": "https://api.github.com/users/fatmelon/orgs",
"repos_url": "https://api.github.com/users/fatmelon/repos",
"events_url": "https://api.github.com/users/fatmelon/events{/privacy}",
"received_events_url": "https://api.github.com/users/fatmelon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"\r\n\r\n\r\n\r\nThe parameters of Albert XXLarge is much less than that of Bert large, because albert used shared parameters in all transformer layers. But it does not reduce computation,Albert xlarge is 1.5 times lower than bert large and Albert xxlarge is 3 times",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"ALBERT repeats the same parameters for each layer but increases each layer size, so even though it has fewer parameters than BERT, the memory needs are greater due to the much larger activations in each layer."
] | 1,575 | 1,632 | 1,581 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hi,
When I used run_square.py to train QA model, I found that albert-xxlarge-v2 uses much more GPU mem than Bert-large. Specifically, when using Bert large, I can set `Max_sequence_ength = 512`, `bash_size = 12`. But when I use albert-xlarge-v2, I can only set `Max_sequence_length` = 512, `bash_size = 6`.
In fact, the number of parameters of Albert XXLarge is much less than that of Bert large, and the size of model file is the same. Why does Albert-xxlarge occupy more GPU memory when training QA model? Is it caused by more head parameters? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1990/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/1990/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1989 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1989/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1989/comments | https://api.github.com/repos/huggingface/transformers/issues/1989/events | https://github.com/huggingface/transformers/issues/1989 | 530,162,493 | MDU6SXNzdWU1MzAxNjI0OTM= | 1,989 | Will you add XLNet text-generation feature ? | {
"login": "efeiefei",
"id": 8653223,
"node_id": "MDQ6VXNlcjg2NTMyMjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8653223?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efeiefei",
"html_url": "https://github.com/efeiefei",
"followers_url": "https://api.github.com/users/efeiefei/followers",
"following_url": "https://api.github.com/users/efeiefei/following{/other_user}",
"gists_url": "https://api.github.com/users/efeiefei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efeiefei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efeiefei/subscriptions",
"organizations_url": "https://api.github.com/users/efeiefei/orgs",
"repos_url": "https://api.github.com/users/efeiefei/repos",
"events_url": "https://api.github.com/users/efeiefei/events{/privacy}",
"received_events_url": "https://api.github.com/users/efeiefei/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Not in the short term",
"@thomwolf Thanks a lot",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,575 | 1,581 | 1,581 | NONE | null | ## ❓ Questions & Help
There is `run_generation.py` in example now. Do you have a plan to add feature of complete lm_finetune and inference? Just like GPT-2.
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1989/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1988 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1988/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1988/comments | https://api.github.com/repos/huggingface/transformers/issues/1988/events | https://github.com/huggingface/transformers/issues/1988 | 530,117,809 | MDU6SXNzdWU1MzAxMTc4MDk= | 1,988 | Possible error in the HuggingFace Transformers documentation? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The documentation seems correct to me. Have you tried out the example code that's been provided? If you run it and check the resulting `lm_prediction_scores`, you'll see its shape is `torch.Size([1, 2, 7, 50258])`, 2 being the length of `choices`.\r\n\r\nThis comment, and the linked blog post, explains the *DoubleHeadsModel pretty well:\r\nhttps://github.com/huggingface/transformers/issues/1794#issuecomment-552627190",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,580 | 1,580 | NONE | null | Hello,
According to HuggingFace Transformers documentation website (https://huggingface.co/transformers/model_doc/gpt2.html#gpt2doubleheadsmodel), under the GPT2DoubleHeadsModel, it defines the output lm_prediction_scores as the following:
`lm_prediction_scores: torch.FloatTensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)`
To me this doesn't make sense. Shouldn't the dimension of `lm_prediction_scores` for `GPT2DoubleHeadsModel` be just `(batch_size, sequence_length, config.vocab_size)`? [no `num_choices` in the middle]
Thank you,
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1988/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1988/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1987 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1987/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1987/comments | https://api.github.com/repos/huggingface/transformers/issues/1987/events | https://github.com/huggingface/transformers/pull/1987 | 530,116,651 | MDExOlB1bGxSZXF1ZXN0MzQ2ODY0NDU4 | 1,987 | Saving and resuming | {
"login": "bilal2vec",
"id": 29356759,
"node_id": "MDQ6VXNlcjI5MzU2NzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/29356759?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bilal2vec",
"html_url": "https://github.com/bilal2vec",
"followers_url": "https://api.github.com/users/bilal2vec/followers",
"following_url": "https://api.github.com/users/bilal2vec/following{/other_user}",
"gists_url": "https://api.github.com/users/bilal2vec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bilal2vec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bilal2vec/subscriptions",
"organizations_url": "https://api.github.com/users/bilal2vec/orgs",
"repos_url": "https://api.github.com/users/bilal2vec/repos",
"events_url": "https://api.github.com/users/bilal2vec/events{/privacy}",
"received_events_url": "https://api.github.com/users/bilal2vec/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1987?src=pr&el=h1) Report\n> Merging [#1987](https://codecov.io/gh/huggingface/transformers/pull/1987?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0cb163865a4c761c226b151283309eedb2b1ca4d?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1987?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1987 +/- ##\n=======================================\n Coverage 82.67% 82.67% \n=======================================\n Files 111 111 \n Lines 16162 16162 \n=======================================\n Hits 13362 13362 \n Misses 2800 2800\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1987?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1987?src=pr&el=footer). Last update [0cb1638...bea4947](https://codecov.io/gh/huggingface/transformers/pull/1987?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thank you for your contribution! I just had a quick look an will give it the time it deserves later. Were you aware of [Pytorch's guidelines](https://pytorch.org/tutorials/beginner/saving_loading_models.html#saving-loading-a-general-checkpoint-for-inference-and-or-resuming-training) when it comes to saving models? I reckon it might simplify your solution. ",
"Did this pull request check the epochs AND steps?\r\nFor instance, my training process stopped in epoch 2 step 6423.\r\nIf I run again it will continue from epoch 2 step 6423?",
"I am no expert, but looking at PyTorch schedulers' code they do keep the current global step in the state. The schedulers should thus continue from the last step. The thing I am wondering about is how to continue from the same data sample.",
"Hi guys,\r\nJust wanted to ask this, Wouldn't too frequent caching to disks slow down the training overall?\r\nWe can have a flag added if the user wants to save every epoch, like ```file_name_{epoch}.pt```.\r\nPlus we can save optimizer etc on the same weights file as well. \r\nPlus allowing users to specify those file names as well should be considered.\r\nThanks.",
"Hi,\r\n\r\n@rlouf, \r\n\r\nSaving the all the checkpoints (model, tokenizer, optimizer, and scheduler) in one file like the pytorch example does would break the `from_pretrained` method, but I could change it to save the optimizer and scheduler in one file instead of two. \r\n\r\nI could change these lines:\r\n\r\n```\r\n# Saving\r\ntorch.save(optimizer.state_dict(), os.path.join(output_dir, 'optimizer.pt'))\r\ntorch.save(scheduler.state_dict(), os.path.join(output_dir, 'scheduler.pt'))\r\n\r\n# Loading\r\noptimizer.load_state_dict(torch.load(os.path.join(args.model_name_or_path, 'optimizer.pt')))\r\nscheduler.load_state_dict(torch.load(os.path.join(args.model_name_or_path, 'scheduler.pt')))\r\n```\r\n\r\nto something like\r\n\r\n```\r\n# Saving\r\ntorch.save({\r\n 'optimizer_state_dict': optimizer.state_dict(),\r\n 'scheduler_state_dict': scheduler.state_dict()\r\n }, os.path.join(output_dir, 'training_state.pt'))\r\n\r\n# Loading\r\ncheckpoint = torch.load(os.path.join(output_dir, 'training_state.pt'))\r\noptimizer.load_state_dict(checkpoint['optimizer_state_dict'])\r\nscheduler.load_state_dict(checkpoint['scheduler_state_dict'])\r\n```\r\n\r\n@marcoaleixo Yes, the code will resume training from the last saved checkpoint. The code saves the model every `--save_steps` training steps and saves the checkpoint in the format `checkpoint-global_step`, so we know exactly which global step the last checkpoint was on. From the global step we can figure out how many epochs have been trained and how many batches should be skipped in the current epoch to continue training from where we left off. The code for this is in [these](https://github.com/bkkaggle/transformers/blob/saving-and-resuming/examples/run_lm_finetuning.py#L230) lines\r\n\r\n@rlouf To make sure you also continue from the last data sample, you would probably have to set a constant random seed in the dataloader and then `continue` through all the epochs and batches until you get to the saved checkpoint, which would take longer, especially if your dataset is very large.\r\n\r\n@AdityaSoni19031997 You can set how often checkpoints are saved using the `--save_steps` flag. Saving the all the checkpoints (model, tokenizer, optimizer, and scheduler) in one file like the pytorch example does would break the `from_pretrained` method. Letting the user choose the file names might make it harder to automatically find and load in the files, and would require new command line parameters.",
"Tell me when this is not WIP anymore and ready to review, ok?",
"Hi @thomwolf, yes this is ready to review",
"@bkkaggle have you compared the training curves of a single run with the training curve of two half-runs with saving/reloading in the middle?",
"Hi @thomwolf, I ran some quick experiments to compare the loss curves. \r\n\r\nThe loss curves are almost, but not exactly identical - probably because the `RandomSampler` doesn't accept a random seed.\r\n\r\nOne more thing: The mean loss at the end of a continued training run won't be the same as a training run completed in one go because the mean loss gets reset to 0 when continuing training. Do you want to find some way around this, maybe by also saving the running loss at each checkpoint? or would doing this add too much complexity for no benefit?\r\n\r\nwandb dashboard with the loss curves: https://app.wandb.ai/bkkaggle/saving-and-resuming?workspace=default\r\n\r\n- `vague_valley_29` is the original 1 epoch training run\r\n- `vital_bird_30` is the same training run, but cancelled after step 100\r\n- `fresh-feather-34` is the training run resumed from `vital_bird_30`'s step 100 checkpoint",
"Ok, yes full determinism can be a rabbit-hole.\r\nI think it's good to merge for me.\r\nOk for you as well @LysandreJik?",
"Yes, looks good to me!",
"I just finished updating the other pytorch examples: `run_xnli.py`, `run_ner.py`, `run_squad.py`, and `run_glue`. I pushed them up to my [branch](https://github.com/bkkaggle/transformers/commits/saving-and-resuming) and If you want, I can open another pull request for them."
] | 1,574 | 1,575 | 1,575 | CONTRIBUTOR | null | Here's my basic implementation of the saving and resuming improvements discussed in #1960. So far, I've only modified the `run_lm_finetuning` example, but if my changes are approved I can update the rest of the examples as well.
There are three main changes:
1. The example now saves the optimizer, scheduler, and tokenizer every `save_steps` iterations.
2. The example now checks whether training is being continued from a checkpoint, and if so, looks for a saved optimizer and scheduler and loads them in.
3. The example checks whether training is being continued from a checkpoint, and if so, gets the global step of the checkpoint and continues training from the last saved global step. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1987/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1987/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1987",
"html_url": "https://github.com/huggingface/transformers/pull/1987",
"diff_url": "https://github.com/huggingface/transformers/pull/1987.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1987.patch",
"merged_at": 1575926676000
} |
https://api.github.com/repos/huggingface/transformers/issues/1986 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1986/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1986/comments | https://api.github.com/repos/huggingface/transformers/issues/1986/events | https://github.com/huggingface/transformers/issues/1986 | 530,106,494 | MDU6SXNzdWU1MzAxMDY0OTQ= | 1,986 | Fine Tuning Bert for Q&A | {
"login": "priteshpatel15",
"id": 7561895,
"node_id": "MDQ6VXNlcjc1NjE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7561895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/priteshpatel15",
"html_url": "https://github.com/priteshpatel15",
"followers_url": "https://api.github.com/users/priteshpatel15/followers",
"following_url": "https://api.github.com/users/priteshpatel15/following{/other_user}",
"gists_url": "https://api.github.com/users/priteshpatel15/gists{/gist_id}",
"starred_url": "https://api.github.com/users/priteshpatel15/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/priteshpatel15/subscriptions",
"organizations_url": "https://api.github.com/users/priteshpatel15/orgs",
"repos_url": "https://api.github.com/users/priteshpatel15/repos",
"events_url": "https://api.github.com/users/priteshpatel15/events{/privacy}",
"received_events_url": "https://api.github.com/users/priteshpatel15/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"A quick workaround would be appending your data to the SQUAD training dataset and doing the fine-tuning as usual.\r\n",
"Yes thats an approach. I've been reading a bit more about GPT and GPT-2 ... i'm wondering if I could use the generative approach with fine-tuning on a specific task that would help with SQUAD Q&A ability for my target domain? What are people's thoughts. Does the math work out?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> This is more of a theoretical question. I would like to use a Bert Model trained on SQUAD 2.0 then train it further on my domain's Q&A dataset.\r\n> \r\n> How would I do that. I've read through the code. As I understand it, I see that the BertforQuestionAnswering would be what I would need to use loaded with a model that is fine tuned on Squad so the weights match the architecture.\r\n> \r\n> But now I want to further fine tune this model to include Q&A training data from my target domain. How would I do that?\r\n\r\nHi @priteshpatel15 I am interested in this problem. Have you found a more elegant solution than what Matthew suggested above?\r\n"
] | 1,574 | 1,592 | 1,580 | NONE | null | This is more of a theoretical question. I would like to use a Bert Model trained on SQUAD 2.0 then train it further on my domain's Q&A dataset.
How would I do that. I've read through the code. As I understand it, I see that the BertforQuestionAnswering would be what I would need to use loaded with a model that is fine tuned on Squad so the weights match the architecture.
But now I want to further fine tune this model to include Q&A training data from my target domain. How would I do that? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1986/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1985 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1985/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1985/comments | https://api.github.com/repos/huggingface/transformers/issues/1985/events | https://github.com/huggingface/transformers/issues/1985 | 530,093,716 | MDU6SXNzdWU1MzAwOTM3MTY= | 1,985 | run_squad.py for tf | {
"login": "RodSernaPerez",
"id": 37450380,
"node_id": "MDQ6VXNlcjM3NDUwMzgw",
"avatar_url": "https://avatars.githubusercontent.com/u/37450380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RodSernaPerez",
"html_url": "https://github.com/RodSernaPerez",
"followers_url": "https://api.github.com/users/RodSernaPerez/followers",
"following_url": "https://api.github.com/users/RodSernaPerez/following{/other_user}",
"gists_url": "https://api.github.com/users/RodSernaPerez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RodSernaPerez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RodSernaPerez/subscriptions",
"organizations_url": "https://api.github.com/users/RodSernaPerez/orgs",
"repos_url": "https://api.github.com/users/RodSernaPerez/repos",
"events_url": "https://api.github.com/users/RodSernaPerez/events{/privacy}",
"received_events_url": "https://api.github.com/users/RodSernaPerez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Not yet, but it's on the roadmap."
] | 1,574 | 1,575 | 1,575 | NONE | null | Is there any version of the script for fine tunning on Squad using tensorflow? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1985/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1984 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1984/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1984/comments | https://api.github.com/repos/huggingface/transformers/issues/1984/events | https://github.com/huggingface/transformers/pull/1984 | 530,091,802 | MDExOlB1bGxSZXF1ZXN0MzQ2ODQ1OTI5 | 1,984 | [WIP] Squad refactor | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1984?src=pr&el=h1) Report\n> Merging [#1984](https://codecov.io/gh/huggingface/transformers/pull/1984?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0cb163865a4c761c226b151283309eedb2b1ca4d?src=pr&el=desc) will **decrease** coverage by `3.11%`.\n> The diff coverage is `17.42%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1984?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1984 +/- ##\n==========================================\n- Coverage 82.67% 79.56% -3.12% \n==========================================\n Files 111 113 +2 \n Lines 16162 16969 +807 \n==========================================\n+ Hits 13362 13501 +139 \n- Misses 2800 3468 +668\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1984?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/data/metrics/squad\\_metrics.py](https://codecov.io/gh/huggingface/transformers/pull/1984/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2RhdGEvbWV0cmljcy9zcXVhZF9tZXRyaWNzLnB5) | `0% <0%> (ø)` | |\n| [transformers/tests/tokenization\\_tests\\_commons.py](https://codecov.io/gh/huggingface/transformers/pull/1984/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl90ZXN0c19jb21tb25zLnB5) | `100% <100%> (ø)` | :arrow_up: |\n| [transformers/data/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/1984/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2RhdGEvX19pbml0X18ucHk=) | `100% <100%> (ø)` | :arrow_up: |\n| [transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1984/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl94bG5ldC5weQ==) | `90.4% <100%> (+0.15%)` | :arrow_up: |\n| [transformers/data/processors/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/1984/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2RhdGEvcHJvY2Vzc29ycy9fX2luaXRfXy5weQ==) | `100% <100%> (ø)` | :arrow_up: |\n| [transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1984/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `91.02% <100%> (+0.55%)` | :arrow_up: |\n| [transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/1984/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2RhdGEvcHJvY2Vzc29ycy9zcXVhZC5weQ==) | `14.7% <14.7%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1984?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1984?src=pr&el=footer). Last update [0cb1638...2a4ef09](https://codecov.io/gh/huggingface/transformers/pull/1984?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Should be about ready to merge now. ~I'm reproducing the results paper results on XLNet, BERT, XLM, RoBERTa, DistilBERT and ALBERT to make sure it works as expected, then I'll rebase and call for review.~\r\n\r\nI made sure that I obtain somewhat the same results with fine-tuning + evaluating models with the old and new scripts. I could confirm it is the case for DistilBERT, BERT and XLNet.",
"Ok this is great, merging!\r\n\r\n@LysandreJik do you want to have a quick look at the two comments I wrote about doc/docstring and the incomplete warning?\r\n\r\nMerging now because these are small tweak and @mfuntowicz need this PR for his PR so I'll let you push a doc commit on master directly maybe."
] | 1,574 | 1,576 | 1,575 | MEMBER | null | This PR aims to refactor SQuAD to make it usable with all models with question answering heads, and without having to build the entire tokenization pipeline as it is currently done.
- It is based on processors that manage data, similarly to the GLUE processors. The two new processors are `SquadV1Processor` and `SquadV2Processor`. They'll probably be merged into a single `SquadProcessor` as the difference between the two versions is minimal.
- It leverages powerful abstractions made for the `run_glue` refactor a few months ago that greatly simplified the tokenization pipeline
- It can be interfaced with the package `tensorflow_datasets`.
- It better respects the library-wide naming, with `attention_mask` instead of `input_mask` and `token_type_ids` instead of `segment_ids`, among others.
- Introduces padding to `encode` and `encode_plus`, alongside tests.
It is still a work on progress but some aspects of it are working.
### Left to do
- [x] Add the processors to `__init__.py`
- [x] Patch the evaluation so that it leverages the current interface
- [x] Patch the evaluation so that it may work with tfds
- [x] Modify the run arguments to reflect the changes
- [x] Remove the `only_first` argument which would only be used for testing
- [x] Update tests running the `run_squad.py` script
- [x] Include the padding location in the tokenizers and reflect the changes in the feature converter
- [x] Test that all current models can train and evaluate (BERT, RoBERTa, XLNet, XLM)
- [x] Add the last models (DistilBERT, ALBERT, ...)
- [x] Return datasets (maybe only pytorch TensorDataset for now)
- [x] Documentation
- [x] Short examples showcasing the simply usage in the processors section.
- [x] Patch the evaluation for impossible questions
### Running sample
Here's the major difference from the user's perspective. Initially, to obtain the examples which were then converted to features, the user had to do as follows (taken from the current `run_squad.py`), which only works for BERT/XLNet/DistilBERT/ALBERT:
```py
examples = read_squad_examples(
input_file=input_file,
is_training=not evaluate,
version_2_with_negative=args.version_2_with_negative
)
features = convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
max_seq_length=args.max_seq_length,
doc_stride=args.doc_stride,
max_query_length=args.max_query_length,
is_training=not evaluate,
cls_token_segment_id=2 if args.model_type in ['xlnet'] else 0,
pad_token_segment_id=3 if args.model_type in ['xlnet'] else 0,
cls_token_at_end=True if args.model_type in ['xlnet'] else False,
sequence_a_is_doc=True if args.model_type in ['xlnet'] else False
)
```
In order to obtain the exact same results, the user now has to do as follows, which will be completely model independant once the `sequence_a_is_doc` is integrated in our sequence pair tokenization methods:
```py
processor = SquadV1Processor()
examples = processor.get_dev_examples("examples/squad") if evaluate else processor.get_train_examples("examples/squad")
features = squad_convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
max_seq_length=args.max_seq_length,
doc_stride=args.doc_stride,
max_query_length=args.max_query_length,
is_training=not evaluate,
sequence_a_is_doc=True if args.model_type in ['xlnet'] else False
)
```
The same can be done by using TFDS instead, removing the need to specify a file. The two initial lines now become:
```py
tfds_examples = tensorflow_datasets.load("squad")["validation"] if evaluate else tensorflow_datasets.load("squad")["train"]
examples = SquadV1Processor().get_examples_from_dataset(tfds_examples)
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1984/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1984",
"html_url": "https://github.com/huggingface/transformers/pull/1984",
"diff_url": "https://github.com/huggingface/transformers/pull/1984.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1984.patch",
"merged_at": 1575972447000
} |
https://api.github.com/repos/huggingface/transformers/issues/1983 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1983/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1983/comments | https://api.github.com/repos/huggingface/transformers/issues/1983/events | https://github.com/huggingface/transformers/issues/1983 | 530,075,921 | MDU6SXNzdWU1MzAwNzU5MjE= | 1,983 | add special tokens | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Do you load everything (model, data) on GPU?\r\n\r\n> Hello\r\n> I tried to add special tokens to bert tokenizer via add_special_tokens:\r\n> \r\n> ```\r\n> tokenizer.add_special_tokens({'additional_special_tokens':['SS']})\r\n> ```\r\n> \r\n> But I got CUDA error\r\n> \r\n> ```\r\n> CUDA error: device-side assert triggered\r\n> ```\r\n> \r\n> The code runs without adding additional_special_tokens!\r\n> Any idea?",
"Yes, I did. The code runs without this line : \r\n```\r\ntokenizer.add_special_tokens({'additional_special_tokens':['SS']})\r\n```\r\nDo you think it is a resourcse issue? \r\nThanks ",
"I got this lately: \r\n```\r\n/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in linear(input, weight, bias)\r\n 1370 ret = torch.addmm(bias, input, weight.t())\r\n 1371 else:\r\n-> 1372 output = input.matmul(weight.t())\r\n 1373 if bias is not None:\r\n 1374 output += bias\r\n\r\nRuntimeError: cublas runtime error : resource allocation failed at /pytorch/aten/src/THC/THCGeneral.cpp:216\r\n```",
"Are you changing the #vocab count in the config as well?\r\n",
"No, I did not. ",
"you can check them with``` tokenizer.convert_ids_to_tokens(id)``` (I don’t remember exactly, but I think, from 100 to 1000 are free, maybe, from 5 to 1000 even free, crosscheck please, also it depends on the \"case\" of the model)\r\nGenerally there's a pack in the beginning and then somewhere in the between we have these free unused tokens..!",
"Try this,\r\n\r\n```\r\n### Let's load a model and tokenizer\r\nmodel = BertForSequenceClassification.from_pretrained('bert-base-uncased')\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n\r\n### Do some stuff to our model and tokenizer\r\n# Ex: add new tokens to the vocabulary and embeddings of our model\r\ntokenizer.add_tokens(['[SPECIAL_TOKEN_1]', '[SPECIAL_TOKEN_2]'])\r\nmodel.resize_token_embeddings(len(tokenizer))\r\n# Train our model\r\ntrain(model)\r\n\r\n### Now let's save our model and tokenizer to a directory\r\nmodel.save_pretrained('./my_saved_model_directory/')\r\ntokenizer.save_pretrained('./my_saved_model_directory/')\r\n\r\n### Reload the model and the tokenizer\r\nmodel = BertForSequenceClassification.from_pretrained('./my_saved_model_directory/')\r\ntokenizer = BertTokenizer.from_pretrained('./my_saved_model_directory/')\r\n```",
"Thank you, \r\nI missed this line. Silly mistake \r\n```\r\nmodel.resize_token_embeddings(len(tokenizer))\r\n```\r\nit worked!",
"You can close the issue :)"
] | 1,574 | 1,575 | 1,575 | NONE | null | Hello
I tried to add special tokens to bert tokenizer via add_special_tokens:
```
tokenizer.add_special_tokens({'additional_special_tokens':['SS']})
```
But I got CUDA error
```
CUDA error: device-side assert triggered
```
The code runs without adding additional_special_tokens!
Any idea? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1983/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1981 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1981/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1981/comments | https://api.github.com/repos/huggingface/transformers/issues/1981/events | https://github.com/huggingface/transformers/issues/1981 | 530,000,684 | MDU6SXNzdWU1MzAwMDA2ODQ= | 1,981 | Transformers for WebNLG tasks | {
"login": "MathewAlexander",
"id": 36654272,
"node_id": "MDQ6VXNlcjM2NjU0Mjcy",
"avatar_url": "https://avatars.githubusercontent.com/u/36654272?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MathewAlexander",
"html_url": "https://github.com/MathewAlexander",
"followers_url": "https://api.github.com/users/MathewAlexander/followers",
"following_url": "https://api.github.com/users/MathewAlexander/following{/other_user}",
"gists_url": "https://api.github.com/users/MathewAlexander/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MathewAlexander/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MathewAlexander/subscriptions",
"organizations_url": "https://api.github.com/users/MathewAlexander/orgs",
"repos_url": "https://api.github.com/users/MathewAlexander/repos",
"events_url": "https://api.github.com/users/MathewAlexander/events{/privacy}",
"received_events_url": "https://api.github.com/users/MathewAlexander/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Actually I'm working on this right now. Interested to know as well if anyone else has done it.\r\nMost probably this is possible.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,581 | 1,581 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Can we leverage GPT-2 pre-trained model for WebNLG tasks ?http://webnlg.loria.fr/pages/challenge.html
The WebNLG challenge consists in mapping data to text
similar to what is being done in https://github.com/tyliupku/wiki2bio. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1981/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/1981/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1980 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1980/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1980/comments | https://api.github.com/repos/huggingface/transformers/issues/1980/events | https://github.com/huggingface/transformers/pull/1980 | 529,957,107 | MDExOlB1bGxSZXF1ZXN0MzQ2NzM3Njgy | 1,980 | update all tf.shape and tensor.shape to shape_list | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1980?src=pr&el=h1) Report\n> :exclamation: No coverage uploaded for pull request base (`master@49a69d5`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).\n> The diff coverage is `90.69%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1980?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1980 +/- ##\n=========================================\n Coverage ? 84.05% \n=========================================\n Files ? 105 \n Lines ? 15533 \n Branches ? 0 \n=========================================\n Hits ? 13056 \n Misses ? 2477 \n Partials ? 0\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1980?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/1980/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3RyYW5zZm9feGxfdXRpbGl0aWVzLnB5) | `85.57% <100%> (ø)` | |\n| [transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1980/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2N0cmwucHk=) | `97.86% <100%> (ø)` | |\n| [transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/1980/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3RyYW5zZm9feGwucHk=) | `92.54% <100%> (ø)` | |\n| [transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1980/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2Rpc3RpbGJlcnQucHk=) | `98.66% <100%> (ø)` | |\n| [transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1980/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3JvYmVydGEucHk=) | `90.43% <100%> (ø)` | |\n| [transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1980/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbmV0LnB5) | `88.16% <100%> (ø)` | |\n| [transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/1980/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2dwdDIucHk=) | `94.75% <100%> (ø)` | |\n| [transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1980/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX29wZW5haS5weQ==) | `95.92% <100%> (ø)` | |\n| [transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/1980/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2FsYmVydC5weQ==) | `85.26% <81.81%> (ø)` | |\n| [transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1980/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2JlcnQucHk=) | `96.31% <84.61%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1980?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1980?src=pr&el=footer). Last update [49a69d5...255516a](https://codecov.io/gh/huggingface/transformers/pull/1980?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Very nice!"
] | 1,574 | 1,651 | 1,575 | MEMBER | null | We need to use the special method `shape_list` from `modeling_tf_utils` to be sure we can get TF 2.0 tensor shapes both in eager and non-eager mode.
This PR fixes this for all TF 2.0 models and templates. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1980/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1980/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1980",
"html_url": "https://github.com/huggingface/transformers/pull/1980",
"diff_url": "https://github.com/huggingface/transformers/pull/1980.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1980.patch",
"merged_at": 1575038451000
} |
https://api.github.com/repos/huggingface/transformers/issues/1979 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1979/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1979/comments | https://api.github.com/repos/huggingface/transformers/issues/1979/events | https://github.com/huggingface/transformers/issues/1979 | 529,954,270 | MDU6SXNzdWU1Mjk5NTQyNzA= | 1,979 | AlbertForQuestionAnswering | {
"login": "garkavem",
"id": 33484321,
"node_id": "MDQ6VXNlcjMzNDg0MzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/33484321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/garkavem",
"html_url": "https://github.com/garkavem",
"followers_url": "https://api.github.com/users/garkavem/followers",
"following_url": "https://api.github.com/users/garkavem/following{/other_user}",
"gists_url": "https://api.github.com/users/garkavem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/garkavem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/garkavem/subscriptions",
"organizations_url": "https://api.github.com/users/garkavem/orgs",
"repos_url": "https://api.github.com/users/garkavem/repos",
"events_url": "https://api.github.com/users/garkavem/events{/privacy}",
"received_events_url": "https://api.github.com/users/garkavem/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! The `albert` checkpoints only include the base model (the transformer model), and not the separate heads for each task (classification/question answering/...).\r\n\r\nFor question answering, you would have to first fine-tune the model to this specific task, as the question answering head is initialized randomly. You can do so with the `run_squad.py` example.",
"It should be explained in that example, thank you for raising this issue! I'll change that.",
"Ok! Thanks a lot!",
"It would be really nice is you can release pretrained checkpoints for the specific tasks... I know its a big ask but it would save so many watts of energy all over the world....",
"The model need to finetune for downstream task is very general and\ntask-agnostic, if released the specific task , what extra thing u need to\ndo ? Also if released, it is not called “pretrained model” , all training\nprocess finished.....\n\nOn Sat, Nov 30, 2019 at 15:58 mosheliv <[email protected]> wrote:\n\n> It would be really nice is you can release pretrained checkpoints for the\n> specific tasks... I know its a big ask but it would save so many watts of\n> energy all over the world....\n>\n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/1979?email_source=notifications&email_token=AIEAE4ESDYPMYNVNJWN266DQWIMLFA5CNFSM4JSVQEGKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEFP377Y#issuecomment-559923199>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AIEAE4ASDKKVFD3ZVMA4WYDQWIMLFANCNFSM4JSVQEGA>\n> .\n>\n",
"I mean fine tuned for squad 2, for example. I would like to play with its\ncapabilities but the fine tuning process is a tad daunting....\n\nOn Sat, Nov 30, 2019, 21:22 pohan <[email protected]> wrote:\n\n> The model need to finetune for downstream task is very general and\n> task-agnostic, if released the specific task , what extra thing u need to\n> do ? Also if released, it is not called “pretrained model” , all training\n> process finished.....\n>\n> On Sat, Nov 30, 2019 at 15:58 mosheliv <[email protected]> wrote:\n>\n> > It would be really nice is you can release pretrained checkpoints for the\n> > specific tasks... I know its a big ask but it would save so many watts of\n> > energy all over the world....\n> >\n> > —\n> > You are receiving this because you are subscribed to this thread.\n> > Reply to this email directly, view it on GitHub\n> > <\n> https://github.com/huggingface/transformers/issues/1979?email_source=notifications&email_token=AIEAE4ESDYPMYNVNJWN266DQWIMLFA5CNFSM4JSVQEGKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEFP377Y#issuecomment-559923199\n> >,\n> > or unsubscribe\n> > <\n> https://github.com/notifications/unsubscribe-auth/AIEAE4ASDKKVFD3ZVMA4WYDQWIMLFANCNFSM4JSVQEGA\n> >\n> > .\n> >\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/1979?email_source=notifications&email_token=AC7IWC66W7KKINHNFXJETBLQWIPD5A5CNFSM4JSVQEGKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEFP4Q7A#issuecomment-559925372>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AC7IWC65E5R33BTHMHSYJWLQWIPD5ANCNFSM4JSVQEGA>\n> .\n>\n",
"I totally agree that it would be nice to have the weights for Albert finetuned on Squad available. ",
"I have found a facebook model pretrained (oh sorry, fine tuned :) on squad2.0 in https://github.com/facebookresearch/SpanBERT.\r\nit is compatible with the huggingface models, so you can get get it with:\r\n`wget http://dl.fbaipublicfiles.com/fairseq/models/spanbert_squad2.tar.gz`\r\nand extract it into say, directory spanbert\r\nI use it something like:\r\n```\r\nimport torch \r\nfrom transformers import BertTokenizer, BertForQuestionAnswering\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-cased')\r\nmodel = BertForQuestionAnswering.from_pretrained('./spanbert')\r\nq = \"who am i?\"\r\ndoc = \"my name is slim shady\"\r\ninput_text = \"[CLS] \" + q+ \" [SEP] \" + doc + \" [SEP]\"\r\ninput_ids = tokenizer.encode(input_text)\r\ntoken_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]\r\nstart_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))\r\nall_tokens = tokenizer.convert_ids_to_tokens(input_ids)\r\nres = all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]\r\nif not res or res[0] == \"[CLS]\":\r\n print(\"MISSING\")\r\nelse:\r\n prev_token = \"\"\r\n for i, t in enumerate(res):\r\n if t.startswith(\"##\"):\r\n res[i-1] += t[2:]\r\n res[i] = \"\"\r\n print(\" \".join([x for x in res if x != \"\"]))\r\n```\r\nI am including the snipped here as it is so hard to find minimal activations of bert on single entries, especially for Q&A\r\n",
"Thanks a lot!",
"@mosheliv - isn't that just for bert, not albert?\r\n",
"Yes, it is, but it was the only squad2 pre-trained i could find.\n\nOn Thu, Dec 5, 2019, 07:40 Mark Feblowitz <[email protected]> wrote:\n\n> @mosheliv <https://github.com/mosheliv> - isn't that just for bert, not\n> albert?\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/1979?email_source=notifications&email_token=AC7IWC3RQUZWCNR34BFZVHTQW72QFA5CNFSM4JSVQEGKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEF6B5II#issuecomment-561782433>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AC7IWC3TJXCV3RN46RWFRX3QW72QFANCNFSM4JSVQEGA>\n> .\n>\n",
"> I have found a facebook model pretrained (oh sorry, fine tuned :) on squad2.0 in https://github.com/facebookresearch/SpanBERT.\r\n> it is compatible with the huggingface models, so you can get get it with:\r\n> `wget http://dl.fbaipublicfiles.com/fairseq/models/spanbert_squad2.tar.gz`\r\n> and extract it into say, directory spanbert\r\n> I use it something like:\r\n> \r\n> ```\r\n> import torch \r\n> from transformers import BertTokenizer, BertForQuestionAnswering\r\n> tokenizer = BertTokenizer.from_pretrained('bert-base-cased')\r\n> model = BertForQuestionAnswering.from_pretrained('./spanbert')\r\n> q = \"who am i?\"\r\n> doc = \"my name is slim shady\"\r\n> input_text = \"[CLS] \" + q+ \" [SEP] \" + doc + \" [SEP]\"\r\n> input_ids = tokenizer.encode(input_text)\r\n> token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]\r\n> start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))\r\n> all_tokens = tokenizer.convert_ids_to_tokens(input_ids)\r\n> res = all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]\r\n> if not res or res[0] == \"[CLS]\":\r\n> print(\"MISSING\")\r\n> else:\r\n> prev_token = \"\"\r\n> for i, t in enumerate(res):\r\n> if t.startswith(\"##\"):\r\n> res[i-1] += t[2:]\r\n> res[i] = \"\"\r\n> print(\" \".join([x for x in res if x != \"\"]))\r\n> ```\r\n> \r\n> I am including the snipped here as it is so hard to find minimal activations of bert on single entries, especially for Q&A\r\n\r\nCan we assume that whenever there's a `[CLS]` in the answer, it basically means no answer? I'm asking since I know depending on how we treat such cases, it can affect the performance evaluation. Please take a look at my question asked [here on SO](https://stackoverflow.com/questions/60133236/what-does-berts-special-characters-appearance-in-squads-qa-answers-mean).\r\n\r\nAlso for folks who might be looking for a running example of fine-tuned ALBERT on SQuAD v2.0, you might find this helpful:\r\n\r\n```\r\nfrom transformers import AutoTokenizer, AutoModelForQuestionAnswering\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"ktrapeznikov/albert-xlarge-v2-squad-v2\")\r\nmodel = AutoModelForQuestionAnswering.from_pretrained(\"ktrapeznikov/albert-xlarge-v2-squad-v2\")\r\nquestion = \"Where is the capital of the USA?\"\r\ntext = \"Capital of the USA is the beautiful Washington D.C.\"\r\n\r\ninput_dict = tokenizer.encode_plus(question, text, return_tensors=\"pt\")\r\ninput_ids = input_dict[\"input_ids\"].tolist()\r\nstart_scores, end_scores = model(**input_dict)\r\n\r\nall_tokens = tokenizer.convert_ids_to_tokens(input_ids[0])\r\nanswer = ''.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]).replace('▁', ' ').strip()\r\nprint(answer)\r\n```\r\n",
"No expert on this model but yes, this is how I used it.\nThanks for the albert, will try it later on!\n\nOn Sun, Feb 9, 2020, 16:56 Pedram <[email protected]> wrote:\n\n> I have found a facebook model pretrained (oh sorry, fine tuned :) on\n> squad2.0 in https://github.com/facebookresearch/SpanBERT.\n> it is compatible with the huggingface models, so you can get get it with:\n> wget http://dl.fbaipublicfiles.com/fairseq/models/spanbert_squad2.tar.gz\n> and extract it into say, directory spanbert\n> I use it something like:\n>\n> import torch\n>\n> from transformers import BertTokenizer, BertForQuestionAnswering\n>\n> tokenizer = BertTokenizer.from_pretrained('bert-base-cased')\n>\n> model = BertForQuestionAnswering.from_pretrained('./spanbert')\n>\n> q = \"who am i?\"\n>\n> doc = \"my name is slim shady\"\n>\n> input_text = \"[CLS] \" + q+ \" [SEP] \" + doc + \" [SEP]\"\n>\n> input_ids = tokenizer.encode(input_text)\n>\n> token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]\n>\n> start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))\n>\n> all_tokens = tokenizer.convert_ids_to_tokens(input_ids)\n>\n> res = all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]\n>\n> if not res or res[0] == \"[CLS]\":\n>\n> print(\"MISSING\")\n>\n> else:\n>\n> prev_token = \"\"\n>\n> for i, t in enumerate(res):\n>\n> if t.startswith(\"##\"):\n>\n> res[i-1] += t[2:]\n>\n> res[i] = \"\"\n>\n> print(\" \".join([x for x in res if x != \"\"]))\n>\n>\n> I am including the snipped here as it is so hard to find minimal\n> activations of bert on single entries, especially for Q&A\n>\n> Can we assume that whenever there's a [CLS] in the answer, it basically\n> means no answer? I'm asking since I know depending on how we treat such\n> cases, it can affect the performance evaluation. Please see take a look at\n> my question asked here on SO\n> <https://stackoverflow.com/questions/60133236/what-does-berts-special-characters-appearance-in-squads-qa-answers-mean>\n> .\n>\n> Also for folks who might be looking for a running example of fine-tuned\n> ALBERT on SQuAD v2.0, you might find this helpful:\n>\n> from transformers import AutoTokenizer, AutoModelForQuestionAnswering\n>\n>\n>\n> tokenizer = AutoTokenizer.from_pretrained(\"ktrapeznikov/albert-xlarge-v2-squad-v2\")\n>\n> model = AutoModelForQuestionAnswering.from_pretrained(\"ktrapeznikov/albert-xlarge-v2-squad-v2\")\n>\n> question = \"Where is the capital of the USA?\"\n>\n> text = \"The capital of the USA is beautiful Washington D.C.\"\n>\n>\n>\n> input_dict = tokenizer.encode_plus(question, text, return_tensors=\"pt\")\n>\n> input_ids = input_dict[\"input_ids\"].tolist()\n>\n> start_scores, end_scores = model(**input_dict)\n>\n>\n>\n> all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0])\n>\n> answer = ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]).replace('▁', '')\n>\n> print(answer)\n>\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/1979?email_source=notifications&email_token=AC7IWCYV3DSSF2HRGDQDFWLRB55FLA5CNFSM4JSVQEGKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOELGBZ2A#issuecomment-583802088>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AC7IWC6KKB4S4UBYSQGQ3M3RB55FLANCNFSM4JSVQEGA>\n> .\n>\n",
"> > I have found a facebook model pretrained (oh sorry, fine tuned :) on squad2.0 in https://github.com/facebookresearch/SpanBERT.\r\n> > it is compatible with the huggingface models, so you can get get it with:\r\n> > `wget http://dl.fbaipublicfiles.com/fairseq/models/spanbert_squad2.tar.gz`\r\n> > and extract it into say, directory spanbert\r\n> > I use it something like:\r\n> > ```\r\n> > import torch \r\n> > from transformers import BertTokenizer, BertForQuestionAnswering\r\n> > tokenizer = BertTokenizer.from_pretrained('bert-base-cased')\r\n> > model = BertForQuestionAnswering.from_pretrained('./spanbert')\r\n> > q = \"who am i?\"\r\n> > doc = \"my name is slim shady\"\r\n> > input_text = \"[CLS] \" + q+ \" [SEP] \" + doc + \" [SEP]\"\r\n> > input_ids = tokenizer.encode(input_text)\r\n> > token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]\r\n> > start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))\r\n> > all_tokens = tokenizer.convert_ids_to_tokens(input_ids)\r\n> > res = all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]\r\n> > if not res or res[0] == \"[CLS]\":\r\n> > print(\"MISSING\")\r\n> > else:\r\n> > prev_token = \"\"\r\n> > for i, t in enumerate(res):\r\n> > if t.startswith(\"##\"):\r\n> > res[i-1] += t[2:]\r\n> > res[i] = \"\"\r\n> > print(\" \".join([x for x in res if x != \"\"]))\r\n> > ```\r\n> > \r\n> > \r\n> > I am including the snipped here as it is so hard to find minimal activations of bert on single entries, especially for Q&A\r\n> \r\n> Can we assume that whenever there's a `[CLS]` in the answer, it basically means no answer? I'm asking since I know depending on how we treat such cases, it can affect the performance evaluation. Please see take a look at my question asked [here on SO](https://stackoverflow.com/questions/60133236/what-does-berts-special-characters-appearance-in-squads-qa-answers-mean).\r\n> \r\n> Also for folks who might be looking for a running example of fine-tuned ALBERT on SQuAD v2.0, you might find this helpful:\r\n> \r\n> ```\r\n> from transformers import AutoTokenizer, AutoModelForQuestionAnswering\r\n> \r\n> tokenizer = AutoTokenizer.from_pretrained(\"ktrapeznikov/albert-xlarge-v2-squad-v2\")\r\n> model = AutoModelForQuestionAnswering.from_pretrained(\"ktrapeznikov/albert-xlarge-v2-squad-v2\")\r\n> question = \"Where is the capital of the USA?\"\r\n> text = \"Capital of the USA is the beautiful Washington D.C.\"\r\n> \r\n> input_dict = tokenizer.encode_plus(question, text, return_tensors=\"pt\")\r\n> input_ids = input_dict[\"input_ids\"].tolist()\r\n> start_scores, end_scores = model(**input_dict)\r\n> \r\n> all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0])\r\n> answer = ''.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]).replace('▁', ' ').strip()\r\n> print(answer)\r\n> ```\r\n\r\nHi! Thanks for this, I'm only a beginner and this really saved me a lot of trouble! I had a small question however. Apparently the page for this model [https://huggingface.co/ktrapeznikov/albert-xlarge-v2-squad-v2] shows there is a way to get the 'scores' of the spans in addition to getting an answer but I couldn't get it to work myself. The code is supposed to be on the lines of: \r\n\r\n```\r\nstart_scores, end_scores = model(input_ids) \r\nspan_scores = start_scores.softmax(dim=1).log()[:,:,None] + end_scores.softmax(dim=1).log()[:,None,:]\r\nignore_score = span_scores[:,0,0] #no answer scores\r\n```\r\n\r\nBut this doesn't return a single score. What am I missing? ",
"@desaibhargav probably a little late for this but you can get the answers scores like so:\r\n\r\n answer_start = torch.argmax(start_scores) # get the most likely beginning of answer with the argmax of the score\r\n answer_end = torch.argmax(end_scores) + 1\r\n answer_span = inputs[\"input_ids\"][0][answer_start:answer_end]\r\n answer_tokens = tokenizer.convert_ids_to_tokens(answer_span)\r\n tokenizer.convert_tokens_to_string(answer_tokens)\r\n\r\nThis converts the answer spans to the answer. However, I'm not sure how to ignore score my best guess is that its filtered based off of some threshold\r\n"
] | 1,574 | 1,612 | 1,575 | NONE | null | Hello! Thanks for adding Albert so quickly! I have a problem with Albert answering a simple question from the Huggingface default example:
```
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertForQuestionAnswering.from_pretrained('albert-base-v2')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
input_text = "[CLS] " + question + " [SEP] " + text + " [SEP]"
input_ids = tokenizer.encode(input_text)
token_type_ids = [0 if i <= input_ids.index(3) else 1 for i in range(len(input_ids))] # for albert [SEP] token has id 3
start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
print(' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]))
```
It actually shows empty output because
```
torch.argmax(start_scores), torch.argmax(end_scores)+1
## (tensor(7), tensor(6))
```
For other versions of Albert I also get some nonsense results :(
Thanks in advance | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1979/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/1979/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1978 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1978/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1978/comments | https://api.github.com/repos/huggingface/transformers/issues/1978/events | https://github.com/huggingface/transformers/issues/1978 | 529,876,561 | MDU6SXNzdWU1Mjk4NzY1NjE= | 1,978 | Modify position_embeddings from pre_trained model | {
"login": "duyduc1110",
"id": 22440962,
"node_id": "MDQ6VXNlcjIyNDQwOTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/22440962?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/duyduc1110",
"html_url": "https://github.com/duyduc1110",
"followers_url": "https://api.github.com/users/duyduc1110/followers",
"following_url": "https://api.github.com/users/duyduc1110/following{/other_user}",
"gists_url": "https://api.github.com/users/duyduc1110/gists{/gist_id}",
"starred_url": "https://api.github.com/users/duyduc1110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/duyduc1110/subscriptions",
"organizations_url": "https://api.github.com/users/duyduc1110/orgs",
"repos_url": "https://api.github.com/users/duyduc1110/repos",
"events_url": "https://api.github.com/users/duyduc1110/events{/privacy}",
"received_events_url": "https://api.github.com/users/duyduc1110/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think you **cannot** change this parameter because doing so you're trying to load weights with (512, 768) shape into an architecture with (1024, 768), and it's not possible.\r\nIf my statement is true (maybe some authors of Transformers can confirm or deny my statement), maybe a way to avoid that end users like you try to change this parameter would be to make this variable private, such as `_max_position_embeddings`.\r\n\r\n> ## Questions & Help\r\n> When I load a model like below:\r\n> `model1 = BertForSequenceClassification.from_pretrained('bert-base-uncased')`\r\n> \r\n> ```\r\n> BertForSequenceClassification(\r\n> (bert): BertModel(\r\n> (embeddings): BertEmbeddings(\r\n> (word_embeddings): Embedding(30522, 768, padding_idx=0)\r\n> (position_embeddings): Embedding(512, 768)\r\n> (token_type_embeddings): Embedding(2, 768)\r\n> (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n> (dropout): Dropout(p=0.1, inplace=False)\r\n> )\r\n> (encoder): BertEncoder(\r\n> (layer): ModuleList(...\r\n> ```\r\n> \r\n> I want to change Embedding size from 512 to 1024, but when I try to add like this and get an error:\r\n> `model = BertForSequenceClassification.from_pretrained('bert-base-uncased', max_position_embeddings=1024)`\r\n> \r\n> > RuntimeError: Error(s) in loading state_dict for BertForSequenceClassification:\r\n> > size mismatch for bert.embeddings.position_embeddings.weight: copying a param with shape torch.Size([512, 768]) from checkpoint, the shape in current model is torch.Size([1024, 768]).\r\n> \r\n> May I know how to change configs of pre-trained model layers?",
"> I think you **cannot** change this parameter because doing so you're trying to load weights with (512, 768) shape into an architecture with (1024, 768), and it's not possible.\r\n> If my statement is true (maybe some authors of Transformers can confirm or deny my statement), maybe a way to avoid that end users like you try to change this parameter would be to make this variable private, such as `_max_position_embeddings`.\r\n> \r\nAs I check with `vars(BertForSequenceClassification.from_pretrained('bert-base-uncased'))`:\r\n```\r\n{'_backend': <torch.nn.backends.thnn.THNNFunctionBackend at 0x1e557269400>,\r\n '_parameters': OrderedDict(),\r\n '_buffers': OrderedDict(),\r\n '_backward_hooks': OrderedDict(),\r\n '_forward_hooks': OrderedDict(),\r\n '_forward_pre_hooks': OrderedDict(),\r\n '_state_dict_hooks': OrderedDict(),\r\n '_load_state_dict_pre_hooks': OrderedDict(),\r\n '_modules': ...\r\n 'training': False,\r\n 'config': {\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"finetuning_task\": null,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"is_decoder\": false,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 512, <============= This is the one\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"num_labels\": 2,\r\n \"output_attentions\": false,\r\n \"output_hidden_states\": false,\r\n \"output_past\": true,\r\n \"pruned_heads\": {},\r\n \"torchscript\": false,\r\n \"type_vocab_size\": 2,\r\n \"use_bfloat16\": false,\r\n \"vocab_size\": 30522\r\n },\r\n 'num_labels': 2}\r\n```\r\nSo I decided to replace `config` with `config=BertConfig(max_position_embeddings=1024)`:\r\n```\r\n{\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"finetuning_task\": null,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"is_decoder\": false,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 1024, <============== It changed\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"num_labels\": 2,\r\n \"output_attentions\": false,\r\n \"output_hidden_states\": false,\r\n \"output_past\": true,\r\n \"pruned_heads\": {},\r\n \"torchscript\": false,\r\n \"type_vocab_size\": 2,\r\n \"use_bfloat16\": false,\r\n \"vocab_size\": 30522\r\n}\r\n```\r\nBut the same error is occurred when `BertForSequenceClassification.from_pretrained('bert-base-uncased', config=config)`:\r\n```\r\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-9-cfc2c553c1d9> in <module>\r\n 1 config=BertConfig(max_position_embeddings=1024)\r\n----> 2 BertForSequenceClassification.from_pretrained('bert-base-uncased', config=config)\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\transformers\\modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 457 if len(error_msgs) > 0:\r\n 458 raise RuntimeError('Error(s) in loading state_dict for {}:\\n\\t{}'.format(\r\n--> 459 model.__class__.__name__, \"\\n\\t\".join(error_msgs)))\r\n 460 \r\n 461 if hasattr(model, 'tie_weights'):\r\n\r\nRuntimeError: Error(s) in loading state_dict for BertForSequenceClassification:\r\n\tsize mismatch for bert.embeddings.position_embeddings.weight: copying a param with shape torch.Size([512, 768]) from checkpoint, the shape in current model is torch.Size([1024, 768]).\r\n```\r\n... 🚶 \r\n",
"Sorry, but it is obvious that it doesn't work. As a said before, BERT was trained with a **particular** architecture (i.e. with 512 as max positional embeddings), and it was saved with this shape. You cannot load weights that doesn't match your architecture!\r\n\r\n> > I think you **cannot** change this parameter because doing so you're trying to load weights with (512, 768) shape into an architecture with (1024, 768), and it's not possible.\r\n> > If my statement is true (maybe some authors of Transformers can confirm or deny my statement), maybe a way to avoid that end users like you try to change this parameter would be to make this variable private, such as `_max_position_embeddings`.\r\n> \r\n> As I check with `vars(BertForSequenceClassification.from_pretrained('bert-base-uncased'))`:\r\n> \r\n> ```\r\n> {'_backend': <torch.nn.backends.thnn.THNNFunctionBackend at 0x1e557269400>,\r\n> '_parameters': OrderedDict(),\r\n> '_buffers': OrderedDict(),\r\n> '_backward_hooks': OrderedDict(),\r\n> '_forward_hooks': OrderedDict(),\r\n> '_forward_pre_hooks': OrderedDict(),\r\n> '_state_dict_hooks': OrderedDict(),\r\n> '_load_state_dict_pre_hooks': OrderedDict(),\r\n> '_modules': ...\r\n> 'training': False,\r\n> 'config': {\r\n> \"attention_probs_dropout_prob\": 0.1,\r\n> \"finetuning_task\": null,\r\n> \"hidden_act\": \"gelu\",\r\n> \"hidden_dropout_prob\": 0.1,\r\n> \"hidden_size\": 768,\r\n> \"initializer_range\": 0.02,\r\n> \"intermediate_size\": 3072,\r\n> \"is_decoder\": false,\r\n> \"layer_norm_eps\": 1e-12,\r\n> \"max_position_embeddings\": 512, <============= This is the one\r\n> \"num_attention_heads\": 12,\r\n> \"num_hidden_layers\": 12,\r\n> \"num_labels\": 2,\r\n> \"output_attentions\": false,\r\n> \"output_hidden_states\": false,\r\n> \"output_past\": true,\r\n> \"pruned_heads\": {},\r\n> \"torchscript\": false,\r\n> \"type_vocab_size\": 2,\r\n> \"use_bfloat16\": false,\r\n> \"vocab_size\": 30522\r\n> },\r\n> 'num_labels': 2}\r\n> ```\r\n> \r\n> So I decided to replace `config` with `config=BertConfig(max_position_embeddings=1024)`:\r\n> \r\n> ```\r\n> {\r\n> \"attention_probs_dropout_prob\": 0.1,\r\n> \"finetuning_task\": null,\r\n> \"hidden_act\": \"gelu\",\r\n> \"hidden_dropout_prob\": 0.1,\r\n> \"hidden_size\": 768,\r\n> \"initializer_range\": 0.02,\r\n> \"intermediate_size\": 3072,\r\n> \"is_decoder\": false,\r\n> \"layer_norm_eps\": 1e-12,\r\n> \"max_position_embeddings\": 1024, <============== It changed\r\n> \"num_attention_heads\": 12,\r\n> \"num_hidden_layers\": 12,\r\n> \"num_labels\": 2,\r\n> \"output_attentions\": false,\r\n> \"output_hidden_states\": false,\r\n> \"output_past\": true,\r\n> \"pruned_heads\": {},\r\n> \"torchscript\": false,\r\n> \"type_vocab_size\": 2,\r\n> \"use_bfloat16\": false,\r\n> \"vocab_size\": 30522\r\n> }\r\n> ```\r\n> \r\n> But the same error is occurred when `BertForSequenceClassification.from_pretrained('bert-base-uncased', config=config)`:\r\n> \r\n> ```\r\n> RuntimeError Traceback (most recent call last)\r\n> <ipython-input-9-cfc2c553c1d9> in <module>\r\n> 1 config=BertConfig(max_position_embeddings=1024)\r\n> ----> 2 BertForSequenceClassification.from_pretrained('bert-base-uncased', config=config)\r\n> \r\n> C:\\ProgramData\\Anaconda3\\lib\\site-packages\\transformers\\modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n> 457 if len(error_msgs) > 0:\r\n> 458 raise RuntimeError('Error(s) in loading state_dict for {}:\\n\\t{}'.format(\r\n> --> 459 model.__class__.__name__, \"\\n\\t\".join(error_msgs)))\r\n> 460 \r\n> 461 if hasattr(model, 'tie_weights'):\r\n> \r\n> RuntimeError: Error(s) in loading state_dict for BertForSequenceClassification:\r\n> \tsize mismatch for bert.embeddings.position_embeddings.weight: copying a param with shape torch.Size([512, 768]) from checkpoint, the shape in current model is torch.Size([1024, 768]).\r\n> ```\r\n> \r\n> ..."
] | 1,574 | 1,575 | 1,575 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
When I load a model like below:
`model1 = BertForSequenceClassification.from_pretrained('bert-base-uncased')`
```
BertForSequenceClassification(
(bert): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(30522, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(...
```
I want to change Embedding size from 512 to 1024, but when I try to add like this and get an error:
`model = BertForSequenceClassification.from_pretrained('bert-base-uncased', max_position_embeddings=1024)`
> RuntimeError: Error(s) in loading state_dict for BertForSequenceClassification:
size mismatch for bert.embeddings.position_embeddings.weight: copying a param with shape torch.Size([512, 768]) from checkpoint, the shape in current model is torch.Size([1024, 768]).
May I know how to change configs of pre-trained model layers? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1978/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1977 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1977/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1977/comments | https://api.github.com/repos/huggingface/transformers/issues/1977/events | https://github.com/huggingface/transformers/issues/1977 | 529,734,085 | MDU6SXNzdWU1Mjk3MzQwODU= | 1,977 | 'convert_tf_checkpoint_to_pytorch.py' file is missing | {
"login": "imayachita",
"id": 3615586,
"node_id": "MDQ6VXNlcjM2MTU1ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3615586?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imayachita",
"html_url": "https://github.com/imayachita",
"followers_url": "https://api.github.com/users/imayachita/followers",
"following_url": "https://api.github.com/users/imayachita/following{/other_user}",
"gists_url": "https://api.github.com/users/imayachita/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imayachita/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imayachita/subscriptions",
"organizations_url": "https://api.github.com/users/imayachita/orgs",
"repos_url": "https://api.github.com/users/imayachita/repos",
"events_url": "https://api.github.com/users/imayachita/events{/privacy}",
"received_events_url": "https://api.github.com/users/imayachita/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_PyTorch-pretrained-BERT_ is a older name of this library; now its name is **Transformers**.\r\nYou can check the latest docs of the library and install it from PyPi with `pip install transformers` (you have to install manually TensorFlow 2.0 and PyTorch as well through `pip install tensorflow==2.0.0` and `pip install torch`).\r\n\r\nSaid this, you can read [this](https://github.com/huggingface/transformers/blob/17ea43cf985829634bd86b36b44e5410c6f83e36/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py) Python script whose goal is to convert BERT original TensorFlow checkpoint to PyTorch. The input of this script are the following three:\r\n- the path to the TensorFlow checkpoint, through `--tf_checkpoint_path` parameter\r\n- a JSON file which specifies the model architecture through `--bert_config_file` parameter\r\n- the path to the output converted PyTorch model through `--pytorch_dump_path` parameter\r\n\r\n> Hi all,\r\n> I pre-trained BERT base model on my domain-specific corpus using `https://github.com/google-research/bert` `create_pretraining_data.py` and `run_pretraining.py`.\r\n> Now, I want to use it with this `pytorch-transformers`. I saw from this page https://devhub.io/repos/huggingface-pytorch-pretrained-BERT that there is conversion script from tf checkpoints to `pytorch_model.bin` called `convert_tf_checkpoint_to_pytorch.py` but the file no longer exists.\r\n> Does anyone have solution? Thanks!",
"Thanks @TheEdoardo93!"
] | 1,574 | 1,575 | 1,575 | NONE | null | Hi all,
I pre-trained BERT base model on my domain-specific corpus using ```https://github.com/google-research/bert``` ```create_pretraining_data.py``` and ```run_pretraining.py```.
Now, I want to use it with this ```pytorch-transformers```. I saw from this page https://devhub.io/repos/huggingface-pytorch-pretrained-BERT that there is conversion script from tf checkpoints to ```pytorch_model.bin``` called ```convert_tf_checkpoint_to_pytorch.py``` but the file no longer exists.
Does anyone have solution? Thanks!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1977/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1976 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1976/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1976/comments | https://api.github.com/repos/huggingface/transformers/issues/1976/events | https://github.com/huggingface/transformers/pull/1976 | 529,655,087 | MDExOlB1bGxSZXF1ZXN0MzQ2NDkxNjE5 | 1,976 | Merge pull request #1 from huggingface/master | {
"login": "ciel-zhang",
"id": 18700473,
"node_id": "MDQ6VXNlcjE4NzAwNDcz",
"avatar_url": "https://avatars.githubusercontent.com/u/18700473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ciel-zhang",
"html_url": "https://github.com/ciel-zhang",
"followers_url": "https://api.github.com/users/ciel-zhang/followers",
"following_url": "https://api.github.com/users/ciel-zhang/following{/other_user}",
"gists_url": "https://api.github.com/users/ciel-zhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ciel-zhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ciel-zhang/subscriptions",
"organizations_url": "https://api.github.com/users/ciel-zhang/orgs",
"repos_url": "https://api.github.com/users/ciel-zhang/repos",
"events_url": "https://api.github.com/users/ciel-zhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/ciel-zhang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1976?src=pr&el=h1) Report\n> Merging [#1976](https://codecov.io/gh/huggingface/transformers/pull/1976?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/96e7ee72380a135bfd07b8fdc2018bcbea65b086?src=pr&el=desc) will **increase** coverage by `0.2%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1976?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1976 +/- ##\n=========================================\n+ Coverage 84.06% 84.26% +0.2% \n=========================================\n Files 105 104 -1 \n Lines 15537 15431 -106 \n=========================================\n- Hits 13061 13003 -58 \n+ Misses 2476 2428 -48\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1976?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fdXRpbHMucHk=) | `92.1% <0%> (-0.11%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `92.68% <0%> (-0.05%)` | :arrow_down: |\n| [transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `89.43% <0%> (-0.03%)` | :arrow_down: |\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `96.46% <0%> (-0.02%)` | :arrow_down: |\n| [transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fZGlzdGlsYmVydC5weQ==) | `89.74% <0%> (ø)` | :arrow_up: |\n| [transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYXV0by5weQ==) | `59.45% <0%> (ø)` | :arrow_up: |\n| [transformers/tests/tokenization\\_tests\\_commons.py](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl90ZXN0c19jb21tb25zLnB5) | `100% <0%> (ø)` | :arrow_up: |\n| [transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2Rpc3RpbGJlcnQucHk=) | `95.87% <0%> (ø)` | :arrow_up: |\n| [transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2F1dG8ucHk=) | `31.81% <0%> (ø)` | :arrow_up: |\n| [transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9hdXRvLnB5) | `45.94% <0%> (ø)` | :arrow_up: |\n| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1976?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1976?src=pr&el=footer). Last update [96e7ee7...5a3f240](https://codecov.io/gh/huggingface/transformers/pull/1976?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,574 | 1,574 | 1,574 | NONE | null | merge | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1976/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1976",
"html_url": "https://github.com/huggingface/transformers/pull/1976",
"diff_url": "https://github.com/huggingface/transformers/pull/1976.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1976.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1975 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1975/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1975/comments | https://api.github.com/repos/huggingface/transformers/issues/1975/events | https://github.com/huggingface/transformers/issues/1975 | 529,612,630 | MDU6SXNzdWU1Mjk2MTI2MzA= | 1,975 | How can we view different versions of documentation? | {
"login": "drydenb",
"id": 9606974,
"node_id": "MDQ6VXNlcjk2MDY5NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9606974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drydenb",
"html_url": "https://github.com/drydenb",
"followers_url": "https://api.github.com/users/drydenb/followers",
"following_url": "https://api.github.com/users/drydenb/following{/other_user}",
"gists_url": "https://api.github.com/users/drydenb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drydenb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drydenb/subscriptions",
"organizations_url": "https://api.github.com/users/drydenb/orgs",
"repos_url": "https://api.github.com/users/drydenb/repos",
"events_url": "https://api.github.com/users/drydenb/events{/privacy}",
"received_events_url": "https://api.github.com/users/drydenb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We're in the process of building better versioned documentation with easier links to follow, but at the moment the different versions are accessible in the [README](https://github.com/huggingface/transformers#state-of-the-art-natural-language-processing-for-tensorflow-20-and-pytorch), right before the `installation` section.",
"Great, thank you!"
] | 1,574 | 1,575 | 1,575 | NONE | null | ## ❓ Questions & Help
How can I select a specific version of transformers in the documentation located here: https://huggingface.co/transformers/index.html?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1975/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1974 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1974/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1974/comments | https://api.github.com/repos/huggingface/transformers/issues/1974/events | https://github.com/huggingface/transformers/issues/1974 | 529,591,822 | MDU6SXNzdWU1Mjk1OTE4MjI= | 1,974 | Albert Hyperparameters for Fine-tuning SQuAD 2.0 | {
"login": "ahotrod",
"id": 44321615,
"node_id": "MDQ6VXNlcjQ0MzIxNjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/44321615?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahotrod",
"html_url": "https://github.com/ahotrod",
"followers_url": "https://api.github.com/users/ahotrod/followers",
"following_url": "https://api.github.com/users/ahotrod/following{/other_user}",
"gists_url": "https://api.github.com/users/ahotrod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahotrod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahotrod/subscriptions",
"organizations_url": "https://api.github.com/users/ahotrod/orgs",
"repos_url": "https://api.github.com/users/ahotrod/repos",
"events_url": "https://api.github.com/users/ahotrod/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahotrod/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Wondering this as well but for GLUE tasks. There don't seem to be a good consensus on hyperparameters such as weight decay and such",
"Results using hyperparameters from my first post above, varying only batch size:\r\n```\r\nalbert_xxlargev1_squad2_512_bs32:\r\n{\r\n \"exact\": 83.67725090541565,\r\n \"f1\": 87.51235434089064,\r\n \"total\": 11873,\r\n \"HasAns_exact\": 81.86572199730094,\r\n \"HasAns_f1\": 89.54692697189559,\r\n \"HasAns_total\": 5928,\r\n \"NoAns_exact\": 85.48359966358284,\r\n \"NoAns_f1\": 85.48359966358284,\r\n \"NoAns_total\": 5945\r\n}\r\n\r\nalbert_xxlargev1_squad2_512_bs48:\r\n{\r\n \"exact\": 83.65198349195654,\r\n \"f1\": 87.4736247587816,\r\n \"total\": 11873,\r\n \"HasAns_exact\": 81.73076923076923,\r\n \"HasAns_f1\": 89.38501126197984,\r\n \"HasAns_total\": 5928,\r\n \"NoAns_exact\": 85.5677039529016,\r\n \"NoAns_f1\": 85.5677039529016,\r\n \"NoAns_total\": 5945\r\n}\r\n```\r\n\r\n\r\n",
"@ahotrod There is a table in the appendix section of the ALBERT paper, which shows hyperparameters for ALBERT in downstream tasks:\r\n\r\n ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,582 | 1,582 | CONTRIBUTOR | null | ## ❓ Questions & Help
I want to fine-tune `albert-xxlarge-v1` on SQuAD 2.0 and am in need of optimal hyperparameters. I did not find any discussion in the Albert original paper regarding suggested fine-tuning hyperparameters, as is provided in the XLNet original paper. I did find the following hard-coded parameters in the Google-research Albert `run_squad_sp.py` code:
```
'do_lower_case' = True
'train_batch_size' = 32
'predict_batch_size' = 8
'learning_rate' = 5e-5
'num_train_epochs' = 3.0
'warmup_proportion' = 0.1
```
With fine-tuning on my 2x GPUs taking ~69 hours, I'd like to shrink the number of fine-tuning iterations necessary to attain optimal model performance. Anyone have a bead on the optimal hyperparameters?
Also, Google-research comments in `run_squad_sp.py` state that `warmup_proportion` is "Proportion of training to perform linear learning rate warmup for." "E.g., 0.1 = 10% of training". Since 3 epochs, batch size = 32 while fine-tuning SQuAD 2.0 results in approximately 12.5K total optimization steps, would I set `--warmup_steps = 1250` when calling Transformers' run_squad.py?
Thanks in advance for any input. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1974/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1973 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1973/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1973/comments | https://api.github.com/repos/huggingface/transformers/issues/1973/events | https://github.com/huggingface/transformers/issues/1973 | 529,583,216 | MDU6SXNzdWU1Mjk1ODMyMTY= | 1,973 | Changes to S3 Roberta / RobertaForSequenceClassification | {
"login": "frankfka",
"id": 31530056,
"node_id": "MDQ6VXNlcjMxNTMwMDU2",
"avatar_url": "https://avatars.githubusercontent.com/u/31530056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankfka",
"html_url": "https://github.com/frankfka",
"followers_url": "https://api.github.com/users/frankfka/followers",
"following_url": "https://api.github.com/users/frankfka/following{/other_user}",
"gists_url": "https://api.github.com/users/frankfka/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frankfka/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankfka/subscriptions",
"organizations_url": "https://api.github.com/users/frankfka/orgs",
"repos_url": "https://api.github.com/users/frankfka/repos",
"events_url": "https://api.github.com/users/frankfka/events{/privacy}",
"received_events_url": "https://api.github.com/users/frankfka/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,580 | 1,580 | NONE | null | Hello,
I'm wondering if there have been any changes in either the pretrained Roberta model or the configuration of RobertaForSequenceClassification within the past month or so. I am initializing it as `RobertaForSequenceClassification.from_pretrained(...)` and running it as demonstrated in `run_glue.py`.
For a custom dataset, I am noticing that the fine-tune accuracy has decreased by several percentage points on newly trained models as opposed to ~ 1 month ago. This happens consistently (i.e I've tried retraining multiple times) and using the same hyperparams.
In addition, I've noticed that the new training times are cut by ~1/2, so the model seems to train faster, but is less performant. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1973/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/1973/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1972 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1972/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1972/comments | https://api.github.com/repos/huggingface/transformers/issues/1972/events | https://github.com/huggingface/transformers/issues/1972 | 529,502,784 | MDU6SXNzdWU1Mjk1MDI3ODQ= | 1,972 | How to persist cloud-based transformers | {
"login": "jmwoloso",
"id": 7530947,
"node_id": "MDQ6VXNlcjc1MzA5NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7530947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmwoloso",
"html_url": "https://github.com/jmwoloso",
"followers_url": "https://api.github.com/users/jmwoloso/followers",
"following_url": "https://api.github.com/users/jmwoloso/following{/other_user}",
"gists_url": "https://api.github.com/users/jmwoloso/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmwoloso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmwoloso/subscriptions",
"organizations_url": "https://api.github.com/users/jmwoloso/orgs",
"repos_url": "https://api.github.com/users/jmwoloso/repos",
"events_url": "https://api.github.com/users/jmwoloso/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmwoloso/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Are you trying to serialize which type of model to .hdf5 file? A pre-trained model such as BertFor* or a custom model trained with Transformers? Are you using GCP or Azure or AWS?\r\n\r\n> ## Questions & Help\r\n> I'm using this repo in the cloud and attempting to persist the model fails as it seems HDF5 and thus h5py doesn't support that paradigm per [h5py/h5py#925](https://github.com/h5py/h5py/issues/925)\r\n> \r\n> What is the recommended method of saving the model in this scenario? Thanks!",
"Hi @TheEdoardo93. I'm trying to serialize TFBertFor* that has been fine-tuned. I'm on Azure.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,580 | 1,580 | CONTRIBUTOR | null | ## ❓ Questions & Help
I'm using this repo in the cloud and attempting to persist the model fails as it seems HDF5 and thus h5py doesn't support that paradigm per https://github.com/h5py/h5py/issues/925
What is the recommended method of saving the model in this scenario? Thanks!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1972/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1971 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1971/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1971/comments | https://api.github.com/repos/huggingface/transformers/issues/1971/events | https://github.com/huggingface/transformers/pull/1971 | 529,466,216 | MDExOlB1bGxSZXF1ZXN0MzQ2MzM4NDc5 | 1,971 | add add_special_tokens=True for input examples | {
"login": "yaolu",
"id": 8982361,
"node_id": "MDQ6VXNlcjg5ODIzNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8982361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaolu",
"html_url": "https://github.com/yaolu",
"followers_url": "https://api.github.com/users/yaolu/followers",
"following_url": "https://api.github.com/users/yaolu/following{/other_user}",
"gists_url": "https://api.github.com/users/yaolu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yaolu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaolu/subscriptions",
"organizations_url": "https://api.github.com/users/yaolu/orgs",
"repos_url": "https://api.github.com/users/yaolu/repos",
"events_url": "https://api.github.com/users/yaolu/events{/privacy}",
"received_events_url": "https://api.github.com/users/yaolu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1971?src=pr&el=h1) Report\n> Merging [#1971](https://codecov.io/gh/huggingface/transformers/pull/1971?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5afca00b4732f57329824e1538897e791e02e894?src=pr&el=desc) will **increase** coverage by `0.18%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1971?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1971 +/- ##\n==========================================\n+ Coverage 84.06% 84.24% +0.18% \n==========================================\n Files 105 104 -1 \n Lines 15536 15431 -105 \n==========================================\n- Hits 13060 13000 -60 \n+ Misses 2476 2431 -45\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1971?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2JlcnQucHk=) | `87.77% <ø> (ø)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `95.63% <0%> (-1.46%)` | :arrow_down: |\n| [transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fdXRpbHMucHk=) | `92.1% <0%> (-0.11%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `92.68% <0%> (-0.05%)` | :arrow_down: |\n| [transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `89.43% <0%> (-0.03%)` | :arrow_down: |\n| [transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fZGlzdGlsYmVydC5weQ==) | `89.74% <0%> (ø)` | :arrow_up: |\n| [transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYXV0by5weQ==) | `59.45% <0%> (ø)` | :arrow_up: |\n| [transformers/tests/tokenization\\_tests\\_commons.py](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl90ZXN0c19jb21tb25zLnB5) | `100% <0%> (ø)` | :arrow_up: |\n| [transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2Rpc3RpbGJlcnQucHk=) | `95.87% <0%> (ø)` | :arrow_up: |\n| [transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2F1dG8ucHk=) | `31.81% <0%> (ø)` | :arrow_up: |\n| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1971?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1971?src=pr&el=footer). Last update [5afca00...d5dd44e](https://codecov.io/gh/huggingface/transformers/pull/1971?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This is less ambiguous, indeed! Thank you for taking the time to open a PR."
] | 1,574 | 1,574 | 1,574 | CONTRIBUTOR | null | According to #1957 , in some versions of transformers, add_special_tokens is not set to True by default. In that case, the example code is wrong as input_ids will missing [CLS] and [SEP] tokens. It's better to pass add_special_tokens=True in the example explicitly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1971/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1971",
"html_url": "https://github.com/huggingface/transformers/pull/1971",
"diff_url": "https://github.com/huggingface/transformers/pull/1971.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1971.patch",
"merged_at": 1574874324000
} |
https://api.github.com/repos/huggingface/transformers/issues/1970 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1970/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1970/comments | https://api.github.com/repos/huggingface/transformers/issues/1970/events | https://github.com/huggingface/transformers/issues/1970 | 529,445,889 | MDU6SXNzdWU1Mjk0NDU4ODk= | 1,970 | Bert Tensor Dimensions | {
"login": "halidziya",
"id": 9038065,
"node_id": "MDQ6VXNlcjkwMzgwNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9038065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/halidziya",
"html_url": "https://github.com/halidziya",
"followers_url": "https://api.github.com/users/halidziya/followers",
"following_url": "https://api.github.com/users/halidziya/following{/other_user}",
"gists_url": "https://api.github.com/users/halidziya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/halidziya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/halidziya/subscriptions",
"organizations_url": "https://api.github.com/users/halidziya/orgs",
"repos_url": "https://api.github.com/users/halidziya/repos",
"events_url": "https://api.github.com/users/halidziya/events{/privacy}",
"received_events_url": "https://api.github.com/users/halidziya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, thanks for letting us know. Is there any way you could provide a script that reproduces the error in a few lines so that we may see what is wrong on our end?",
"model.predict function fails with output_hidden_states=True in constructor",
"I'm failing to reproduce what you're mentioning with the following snippet:\r\n\r\n```py\r\nfrom transformers import TFBertModel, BertTokenizer, BertConfig\r\nimport tensorflow as tf\r\n\r\nconfig = BertConfig.from_pretrained(\"bert-base-cased\", output_hidden_states=True)\r\nmodel = TFBertModel.from_pretrained(\"bert-base-cased\", config=config)\r\n\r\ntok = BertTokenizer.from_pretrained(\"bert-base-cased\")\r\ntext = tok.encode(\"Ain't this [MASK] best thing you've ever seen?\")\r\n\r\ninputs = tf.constant(text)\r\noutputs = model.predict(inputs)\r\n\r\nprint(outputs)\r\n```\r\nIs there any way you could provide a script that reproduces the error in a few lines so that we may see what is wrong on our end?\r\n",
"With this piece of code you've posted, I'm encountered the same problem highlighted by @halidziya .\r\n\r\n**ENVIRONMENT**:\r\n- Python 3.6.9\r\n- OS: Ubuntu 16.04 ('Linux-4.15.0-70-generic-x86_64-with-debian-buster-sid')\r\n- Transformers: 2.2.2 (installed with `pip install transformers` the day after the release)\r\n- PyTorch: 1.3.1\r\n- TensorFlow: 2.0.0\r\n\r\nThe stack trace is reported below:\r\n```\r\nPython 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) \r\n[GCC 7.3.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from transformers import TFBertModel, BertTokenizer, BertConfig\r\n/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\r\n/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\r\n/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\r\n/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\r\n/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\r\n/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\r\n2019-12-03 09:46:35.606174: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA\r\n2019-12-03 09:46:35.610775: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3408000000 Hz\r\n2019-12-03 09:46:35.611320: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55fa0a097860 executing computations on platform Host. Devices:\r\n2019-12-03 09:46:35.611341: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version\r\n>>> import tensorflow as tf\r\n>>> config = BertConfig.from_pretrained(\"bert-base-cased\", output_hidden_states=True)\r\n>>> model = TFBertModel.from_pretrained(\"bert-base-cased\", config=config)\r\n>>> tok = BertTokenizer.from_pretrained(\"bert-base-cased\")\r\n>>> text = tok.encode(\"Ain't this [MASK] best thing you've ever seen?\")\r\n>>> inputs = tf.constant(text)\r\n>>> outputs = model.predict(inputs)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py\", line 909, in predict\r\n use_multiprocessing=use_multiprocessing)\r\n File \"/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_arrays.py\", line 715, in predict\r\n x, check_steps=True, steps_name='steps', steps=steps)\r\n File \"/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py\", line 2419, in _standardize_user_data\r\n all_inputs, y_input, dict_inputs = self._build_model_with_inputs(x, y)\r\n File \"/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py\", line 2622, in _build_model_with_inputs\r\n self._set_inputs(cast_inputs)\r\n File \"/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py\", line 2709, in _set_inputs\r\n outputs = self(inputs, **kwargs)\r\n File \"/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py\", line 842, in __call__\r\n outputs = call_fn(cast_inputs, *args, **kwargs)\r\n File \"/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorflow_core/python/autograph/impl/api.py\", line 237, in wrapper\r\n raise e.ag_error_metadata.to_exception(e)\r\nValueError: in converted code:\r\n relative to /home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages:\r\n\r\n transformers/modeling_tf_bert.py:684 call *\r\n outputs = self.bert(inputs, **kwargs)\r\n tensorflow_core/python/keras/engine/base_layer.py:842 __call__\r\n outputs = call_fn(cast_inputs, *args, **kwargs)\r\n transformers/modeling_tf_bert.py:512 call *\r\n attention_mask = tf.fill(input_shape, 1)\r\n tensorflow_core/python/ops/array_ops.py:171 fill\r\n result = gen_array_ops.fill(dims, value, name=name)\r\n tensorflow_core/python/ops/gen_array_ops.py:3602 fill\r\n \"Fill\", dims=dims, value=value, name=name)\r\n tensorflow_core/python/framework/op_def_library.py:545 _apply_op_helper\r\n (input_name, err))\r\n\r\n ValueError: Tried to convert 'dims' to a tensor and failed. Error: Cannot convert a partially known TensorShape to a Tensor: (None, 1)\r\n```\r\n\r\n> I'm failing to reproduce what you're mentioning with the following snippet:\r\n> \r\n> ```python\r\n> from transformers import TFBertModel, BertTokenizer, BertConfig\r\n> import tensorflow as tf\r\n> \r\n> config = BertConfig.from_pretrained(\"bert-base-cased\", output_hidden_states=True)\r\n> model = TFBertModel.from_pretrained(\"bert-base-cased\", config=config)\r\n> \r\n> tok = BertTokenizer.from_pretrained(\"bert-base-cased\")\r\n> text = tok.encode(\"Ain't this [MASK] best thing you've ever seen?\")\r\n> \r\n> inputs = tf.constant(text)\r\n> outputs = model.predict(inputs)\r\n> \r\n> print(outputs)\r\n> ```\r\n> \r\n> Is there any way you could provide a script that reproduces the error in a few lines so that we may see what is wrong on our end?",
"Cannot reproduce on release 2.2.1.\r\nCan you check with the latest release or master?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,581 | 1,581 | NONE | null | ## 🐛 Bug
My code was working on 2.1 it throws an error in 2.2:
transformers\modeling_tf_bert.py:777 call
outputs = self.bert(inputs, **kwargs)
transformers\modeling_tf_bert.py:512 call
attention_mask = tf.fill(input_shape, 1)
tensorflow_core\python\ops\array_ops.py:171 fill
result = gen_array_ops.fill(dims, value, name=name)
tensorflow_core\python\ops\gen_array_ops.py:3602 fill
"Fill", dims=dims, value=value, name=name)
tensorflow_core\python\framework\op_def_library.py:545 _apply_op_helper
(input_name, err))
ValueError: Tried to convert 'dims' to a tensor and failed. Error: Cannot convert a partially known TensorShape to a Tensor: (None, 72)
<!-- Important information -->
Model I am using (Bert, XLNet....): BERT
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [ ] the official example scripts: (give details)
* [X] my own modified scripts: predict function
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details)
## Environment
* OS:
* Python version: 3.7.4
* PyTorch version: -
* Tensorflow : 2.0.0
* Using GPU : Yes
* Distributed of parallel setup : No
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1970/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1970/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1969 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1969/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1969/comments | https://api.github.com/repos/huggingface/transformers/issues/1969/events | https://github.com/huggingface/transformers/pull/1969 | 529,445,528 | MDExOlB1bGxSZXF1ZXN0MzQ2MzIxNDY5 | 1,969 | Implemented concurrent encoding and converting of sequences for data binarization | {
"login": "sgraaf",
"id": 8904453,
"node_id": "MDQ6VXNlcjg5MDQ0NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8904453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgraaf",
"html_url": "https://github.com/sgraaf",
"followers_url": "https://api.github.com/users/sgraaf/followers",
"following_url": "https://api.github.com/users/sgraaf/following{/other_user}",
"gists_url": "https://api.github.com/users/sgraaf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgraaf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgraaf/subscriptions",
"organizations_url": "https://api.github.com/users/sgraaf/orgs",
"repos_url": "https://api.github.com/users/sgraaf/repos",
"events_url": "https://api.github.com/users/sgraaf/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgraaf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1969?src=pr&el=h1) Report\n> Merging [#1969](https://codecov.io/gh/huggingface/transformers/pull/1969?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/49108288ba6e6dcfe554d1af98699ae7a1e6f39c?src=pr&el=desc) will **increase** coverage by `0.29%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1969?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1969 +/- ##\n==========================================\n+ Coverage 83.97% 84.26% +0.29% \n==========================================\n Files 105 104 -1 \n Lines 15529 15431 -98 \n==========================================\n- Hits 13040 13003 -37 \n+ Misses 2489 2428 -61\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1969?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fdXRpbHMucHk=) | `92.1% <0%> (-0.11%)` | :arrow_down: |\n| [transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `92.14% <0%> (-0.09%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `92.68% <0%> (-0.05%)` | :arrow_down: |\n| [transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `89.43% <0%> (-0.03%)` | :arrow_down: |\n| [transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fZGlzdGlsYmVydC5weQ==) | `89.74% <0%> (ø)` | :arrow_up: |\n| [transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYXV0by5weQ==) | `59.45% <0%> (ø)` | :arrow_up: |\n| [transformers/tests/tokenization\\_tests\\_commons.py](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl90ZXN0c19jb21tb25zLnB5) | `100% <0%> (ø)` | :arrow_up: |\n| [transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2Rpc3RpbGJlcnQucHk=) | `95.87% <0%> (ø)` | :arrow_up: |\n| [transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2F1dG8ucHk=) | `31.81% <0%> (ø)` | :arrow_up: |\n| [transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9hdXRvLnB5) | `45.94% <0%> (ø)` | :arrow_up: |\n| ... and [20 more](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1969?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1969?src=pr&el=footer). Last update [4910828...265dbe8](https://codecov.io/gh/huggingface/transformers/pull/1969?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,583 | 1,583 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1969/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1969",
"html_url": "https://github.com/huggingface/transformers/pull/1969",
"diff_url": "https://github.com/huggingface/transformers/pull/1969.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1969.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/1968 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1968/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1968/comments | https://api.github.com/repos/huggingface/transformers/issues/1968/events | https://github.com/huggingface/transformers/issues/1968 | 529,387,673 | MDU6SXNzdWU1MjkzODc2NzM= | 1,968 | AlbertPreTrainedModel class is not available in release v2.2.0 | {
"login": "bugface",
"id": 16659741,
"node_id": "MDQ6VXNlcjE2NjU5NzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/16659741?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bugface",
"html_url": "https://github.com/bugface",
"followers_url": "https://api.github.com/users/bugface/followers",
"following_url": "https://api.github.com/users/bugface/following{/other_user}",
"gists_url": "https://api.github.com/users/bugface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bugface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bugface/subscriptions",
"organizations_url": "https://api.github.com/users/bugface/orgs",
"repos_url": "https://api.github.com/users/bugface/repos",
"events_url": "https://api.github.com/users/bugface/events{/privacy}",
"received_events_url": "https://api.github.com/users/bugface/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, indeed there was a mistake with `TFBertForPretraining` referenced under the ALBERT documentation. This was fixed with 3616209.\r\n\r\nThe `PreTrainedModel` class is not available for ALBERT, which is the same for CTRL, GPT, GPT-2, DistilBERT, CamemBERT, TransformerXL, XLM and XLNet. \r\n\r\nWhy is that class useful for your use-case, seeing as it's a simple wrapper over `PreTrainedModel` with a few overridden attributes?\r\n",
"> Hi, indeed there was a mistake with `TFBertForPretraining` referenced under the ALBERT documentation. This was fixed with [3616209](https://github.com/huggingface/transformers/commit/361620954acf16b27727d763a591257b03f90b5d).\r\n> \r\n> The `PreTrainedModel` class is not available for ALBERT, which is the same for CTRL, GPT, GPT-2, DistilBERT, CamemBERT, TransformerXL, XLM and XLNet.\r\n> \r\n> Why is that class useful for your use-case, seeing as it's a simple wrapper over `PreTrainedModel` with a few overridden attributes?\r\n\r\nI completely agree we can use PreTrainedModel or BertPreTrainedModel instead.\r\n \r\nThe question is that I do see the implementation of the class AlbertPreTrainedModel(PreTrainedModel) in the source code (transformers/modeling_albert.py line 313) but I cannot import it. It seems that it is not included in the released version. I just feel it is weird.\r\n",
"Yes, it is not importable as it is an internal used by different models, but I fail to see a use-case where it would be useful for the library users.\r\n\r\nWhy is that class useful for your use-case, seeing as it's a simple wrapper over `PreTrainedModel` with a few overridden attributes?",
"I think we could add all the `XXXPretrainedModel` in `__init__` indeed. Would make it easier for people to build custom-made models that can load pretrained checkpoints as well.",
"Fixed on master"
] | 1,574 | 1,575 | 1,575 | CONTRIBUTOR | null | ## ❓ Questions & Help
In the release v2.2.0, the AlbertForSequenceClassification class inherits the AlbertPreTrainedModel class as "class AlbertForSequenceClassification(AlbertPreTrainedModel)".
However, in the doc, this pre-trained model is not documented. And in the released v2.2.0 transformer, the AlbertPreTrainedModel class is not available to be imported.
This is not a big issue since we can use BertPreTrainedModel class instead (like RoBERTa). But it should be consistent. Especially in the doc under Albert, there is a class called TFBertForPretraining which makes confusion for users.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1968/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1967 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1967/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1967/comments | https://api.github.com/repos/huggingface/transformers/issues/1967/events | https://github.com/huggingface/transformers/issues/1967 | 529,346,868 | MDU6SXNzdWU1MjkzNDY4Njg= | 1,967 | Trouble running 'bert-base-multilingual-cased' | {
"login": "jungwhank",
"id": 53588015,
"node_id": "MDQ6VXNlcjUzNTg4MDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/53588015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jungwhank",
"html_url": "https://github.com/jungwhank",
"followers_url": "https://api.github.com/users/jungwhank/followers",
"following_url": "https://api.github.com/users/jungwhank/following{/other_user}",
"gists_url": "https://api.github.com/users/jungwhank/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jungwhank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jungwhank/subscriptions",
"organizations_url": "https://api.github.com/users/jungwhank/orgs",
"repos_url": "https://api.github.com/users/jungwhank/repos",
"events_url": "https://api.github.com/users/jungwhank/events{/privacy}",
"received_events_url": "https://api.github.com/users/jungwhank/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I think the error was because text is greater than 512 tokens.\r\nI got no error when text is smaller than 512 tokens.",
"Hi! Indeed the models have a maximum input size, which is 512 for BERT. You should have received a warning when tokenizing your sequence, but unfortunately, there isn't much more we can do to clarify this error further.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,580 | 1,580 | CONTRIBUTOR | null | ## ❓ Questions & Help
Hi,
I got trouble when I apply Quickstart BERT example to korean news text.
but when I run this code using model 'bert-base-multilingual-cased',
```{.python}
# Predict hidden states features for each layer
with torch.no_grad():
# See the models docstrings for the detail of the inputs
outputs = model(tokens_tensor, token_type_ids=segments_tensors)
# Transformers models always output tuples.
# See the models docstrings for the detail of all the outputs
# In our case, the first element is the hidden state of the last layer of the Bert model
encoded_layers = outputs[0]
# We have encoded our input sequence in a FloatTensor of shape (batch size, sequence length, model hidden dimension)
assert tuple(encoded_layers.shape) == (1, len(indexed_tokens), model.config.hidden_size)
```
I sometimes got error like below,
```{.python}
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-39-b8e77f8ffc14> in <module>
2 with torch.no_grad():
3 # See the models docstrings for the detail of the inputs
----> 4 outputs = model(tokens_tensor, token_type_ids=segments_tensors)
5 # Transformers models always output tuples.
6 # See the models docstrings for the detail of all the outputs
//anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
//anaconda3/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask)
712 head_mask = [None] * self.config.num_hidden_layers
713
--> 714 embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds)
715 encoder_outputs = self.encoder(embedding_output,
716 attention_mask=extended_attention_mask,
//anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
//anaconda3/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
176 inputs_embeds = self.word_embeddings(input_ids)
177 position_embeddings = self.position_embeddings(position_ids)
--> 178 token_type_embeddings = self.token_type_embeddings(token_type_ids)
179
180 embeddings = inputs_embeds + position_embeddings + token_type_embeddings
//anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
//anaconda3/lib/python3.7/site-packages/torch/nn/modules/sparse.py in forward(self, input)
112 return F.embedding(
113 input, self.weight, self.padding_idx, self.max_norm,
--> 114 self.norm_type, self.scale_grad_by_freq, self.sparse)
115
116 def extra_repr(self):
//anaconda3/lib/python3.7/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1482 # remove once script supports set_grad_enabled
1483 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1484 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1485
1486
RuntimeError: index out of range: Tried to access index 2 out of table with 1 rows. at ../aten/src/TH/generic/THTensorEvenMoreMath.cpp:418
```
Any help will be greatly appreciated.
Thanks!
Info:
OS: MacOsX 10.14.6 (Mojave)
python : 3.7
PyTorch : 1.3.1
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1967/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/1967/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1966 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1966/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1966/comments | https://api.github.com/repos/huggingface/transformers/issues/1966/events | https://github.com/huggingface/transformers/pull/1966 | 529,243,593 | MDExOlB1bGxSZXF1ZXN0MzQ2MTU1NDM1 | 1,966 | Fix issue: #1962, input shape problem | {
"login": "billpku",
"id": 11024954,
"node_id": "MDQ6VXNlcjExMDI0OTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/11024954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/billpku",
"html_url": "https://github.com/billpku",
"followers_url": "https://api.github.com/users/billpku/followers",
"following_url": "https://api.github.com/users/billpku/following{/other_user}",
"gists_url": "https://api.github.com/users/billpku/gists{/gist_id}",
"starred_url": "https://api.github.com/users/billpku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/billpku/subscriptions",
"organizations_url": "https://api.github.com/users/billpku/orgs",
"repos_url": "https://api.github.com/users/billpku/repos",
"events_url": "https://api.github.com/users/billpku/events{/privacy}",
"received_events_url": "https://api.github.com/users/billpku/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1966?src=pr&el=h1) Report\n> Merging [#1966](https://codecov.io/gh/huggingface/transformers/pull/1966?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cc7968227e08858df4a5c618c739e1a3ca050196?src=pr&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1966?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1966 +/- ##\n==========================================\n- Coverage 84.26% 84.24% -0.02% \n==========================================\n Files 104 104 \n Lines 15431 15431 \n==========================================\n- Hits 13003 13000 -3 \n- Misses 2428 2431 +3\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1966?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/1966/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2FsYmVydC5weQ==) | `85.49% <100%> (ø)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1966/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `95.63% <0%> (-1.46%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1966?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1966?src=pr&el=footer). Last update [cc79682...a1aec9c](https://codecov.io/gh/huggingface/transformers/pull/1966?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great, thank you for this @billpku !"
] | 1,574 | 1,574 | 1,574 | CONTRIBUTOR | null | Hi,
To Fix #1962
The input's shape seem to cause error in 2.2.0 version tf_albert_model
Hope that it can help. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1966/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1966",
"html_url": "https://github.com/huggingface/transformers/pull/1966",
"diff_url": "https://github.com/huggingface/transformers/pull/1966.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1966.patch",
"merged_at": 1574869091000
} |
https://api.github.com/repos/huggingface/transformers/issues/1965 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1965/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1965/comments | https://api.github.com/repos/huggingface/transformers/issues/1965/events | https://github.com/huggingface/transformers/issues/1965 | 529,224,770 | MDU6SXNzdWU1MjkyMjQ3NzA= | 1,965 | XLMForTokenClassification | {
"login": "avacaondata",
"id": 35173563,
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avacaondata",
"html_url": "https://github.com/avacaondata",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions",
"organizations_url": "https://api.github.com/users/avacaondata/orgs",
"repos_url": "https://api.github.com/users/avacaondata/repos",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"received_events_url": "https://api.github.com/users/avacaondata/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @alexvaca0 ,\r\n\r\nTry to remove the `d_model` parameter in the constructor. Use `config.emb_dim` (`emb_dim` is specified in the [xlm config](https://s3.amazonaws.com/models.huggingface.co/bert/xlm-mlm-en-2048-config.json)), so this should work:\r\n\r\n```python\r\nself.classifier = nn.Linear(config.emb_dim, config.num_labels)\r\n```\r\n\r\n:)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,580 | 1,580 | NONE | null | # 🌟New model addition
## Model description
As the XLM version for token classification (NER tasks) was not implemented, I followed a similar path as for BertForTokenClassification, and it seems to work. The reason for this is that BERT multilingual works horribly for WikiNER in spanish, achieving only 55% F1 with much fine-tuning, against the 87% of Spacy. Therefore, I'm trying to improve this metric and for that purpose I decided to use XLM, which is trained only on 15 different languages, not on more than 100. There's still one think that my model implementation lacks, and that is the fact that model dimension has to be set manually. I've been trying to add d_model to XLMConfig and then pass this config to my class, but it says XLMModel has no attribute d_model. If anyone can help me out with that I'd appreciate that.
<!-- Important information -->
The code:
class XLMForTokenClassification(XLMModel):
def __init__(self, config, d_model=1024):
super(XLMForTokenClassification, self).__init__(config)
self.num_labels = config.num_labels
self.xlm = XLMModel(config)
self.dropout = nn.Dropout(config.dropout)
self.classifier = nn.Linear(d_model, config.num_labels)
self.init_weights()
def forward(self, input_ids=None, attention_mask=None, langs=None, token_type_ids=None,
position_ids=None, head_mask=None, labels=None):
outputs = self.xlm(input_ids,
attention_mask=attention_mask,
langs=langs,
token_type_ids=position_ids,
head_mask=head_mask)
sequence_output = self.dropout(outputs[0])
logits = self.classifier(sequence_output)
outputs = (logits, ) + outputs[2:] #add hidden states and attention if they are here
if labels is not None:
loss_fct = nn.CrossEntropyLoss()
#only keep active parts of the loss
if attention_mask is not None:
active_loss = attention_mask.view(-1) == 1
active_logits = logits.view(-1, self.num_labels)[active_loss]
active_labels = labels.view(-1)[active_loss]
loss = loss_fct(active_logits, active_labels)
else:
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
outputs = (loss, ) + outputs
return outputs
## Open Source status
* [x] the model implementation is available: (give details)
It is uploaded above.
* [ ] the model weights are available: (give details)
* [x] who are the authors: (mention them)
Alejandro Vaca Serrano
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1965/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/1965/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1964 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1964/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1964/comments | https://api.github.com/repos/huggingface/transformers/issues/1964/events | https://github.com/huggingface/transformers/issues/1964 | 529,173,514 | MDU6SXNzdWU1MjkxNzM1MTQ= | 1,964 | How to increase model saving checkpoint from 50 to 1000? | {
"login": "snijesh",
"id": 25811390,
"node_id": "MDQ6VXNlcjI1ODExMzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/25811390?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/snijesh",
"html_url": "https://github.com/snijesh",
"followers_url": "https://api.github.com/users/snijesh/followers",
"following_url": "https://api.github.com/users/snijesh/following{/other_user}",
"gists_url": "https://api.github.com/users/snijesh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/snijesh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/snijesh/subscriptions",
"organizations_url": "https://api.github.com/users/snijesh/orgs",
"repos_url": "https://api.github.com/users/snijesh/repos",
"events_url": "https://api.github.com/users/snijesh/events{/privacy}",
"received_events_url": "https://api.github.com/users/snijesh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"**You haven't to modify the source code in this script**. When you call `run_squad.py` script, you have to pass the `--save_steps` parameter and set its value to 1000 (as you can see [here](https://github.com/huggingface/transformers/blob/master/examples/run_squad.py#L429).) So, the entire command would be something like that: `python run_squad.py ... --save_steps 1000`\r\n\r\n> ## Questions & Help\r\n> When I run the script `run_squad.py` , it creates model check points every 50th iteration. How to increase model saving checkpoint from 50 to 1000. Where should I edit the code ?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Related to this question, how about making the default value (50) bigger (e.g., 1000) in scripts such as `run_squad.py` and `run_ner.py`?\r\nIf a `--save_steps` option is not specified, and the default value is used, many checkpoints are saved.",
"@tomohideshibata you're right that 50 is too low, I bumped it to 500 in 335dd5e.",
"Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,586 | 1,586 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
When I run the script `run_squad.py` , it creates model check points every 50th iteration. How to increase model saving checkpoint from 50 to 1000. Where should I edit the code ?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1964/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1963 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1963/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1963/comments | https://api.github.com/repos/huggingface/transformers/issues/1963/events | https://github.com/huggingface/transformers/issues/1963 | 529,149,916 | MDU6SXNzdWU1MjkxNDk5MTY= | 1,963 | Did the underlying pre-trained models change somehow? | {
"login": "jmwoloso",
"id": 7530947,
"node_id": "MDQ6VXNlcjc1MzA5NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7530947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmwoloso",
"html_url": "https://github.com/jmwoloso",
"followers_url": "https://api.github.com/users/jmwoloso/followers",
"following_url": "https://api.github.com/users/jmwoloso/following{/other_user}",
"gists_url": "https://api.github.com/users/jmwoloso/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmwoloso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmwoloso/subscriptions",
"organizations_url": "https://api.github.com/users/jmwoloso/orgs",
"repos_url": "https://api.github.com/users/jmwoloso/repos",
"events_url": "https://api.github.com/users/jmwoloso/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmwoloso/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Total user-error here. Made a change to a function that writes the TF Records to storage (the name of one of the features) and didn't propagate that info to the function i wrote that reads the TF Records back in, so it wasn't loading my input masks because it was looking for a key that didn't exist."
] | 1,574 | 1,574 | 1,574 | CONTRIBUTOR | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): Bert
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [ ] the official example scripts: (give details)
* [X] my own modified scripts: I'm just loading the pre-trained bert-base-uncased model
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: document classification
## To Reproduce
Steps to reproduce the behavior:
1. `PRETRAINED_WEIGHTS = "bert-base-uncased"`
2. `model = TFBertForSequenceClassification.from_pretrained(PRETRAINED_WEIGHTS)`
3. see just below
```
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
sparse_categorical_accuracy = tf.keras.metrics.SparseCategoricalAccuracy("train_accuracy")
model.compile(optimizer=optimizer,
loss=loss,
metrics=[sparse_categorical_accuracy])
```
4. see just below
```
history = model.fit([train_input_ids, train_input_masks, train_segment_ids],
train_labels,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=([val_input_ids, val_input_masks, val_segment_ids],
val_labels),
use_multiprocessing=True,
verbose=1)
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
That the model begins training.
## Environment
* OS: Databricks (linux?)
* Python version: 3.7
* PyTorch version: I'm using the TF-flavor models
* PyTorch Transformers version (or branch): v2.1.1 & v2.2.0 (see additional context)
* Using GPU ? Yes
* Distributed of parallel setup ? No
* Any other relevant information:
## Additional context
In databricks I was not pinning the transformers version so it was installing the latest. I have always pinned the `mlflow` version to 1.4.0 though. I tried with what the latest prior release of `transformers` was (2.1.1) and still got the same error where this worked flawlessly before. The error is below and it specifies it is an `mlflow` issue, though in reality I think it may have something to do with the pretrained model that is loaded when we specify `bert-base-uncased`. It seems this underlying model changed independently of the latest release of `transformers`? Are the pre-trained models from some public Google repository or are they Huggingface-specific?
Thanks again for supporting TF 2, this repo has been a blessing!
Traceback:
```
UserWarning: Logging to MLflow failed: Saving the model to HDF5 format requires the model to be a Functional model or a Sequential model. It does not work for subclassed models, because such models are defined via the body of a Python method, which isn't safely serializable. Consider saving to the Tensorflow SavedModel format (by setting save_format="tf") or using `save_weights`.
try_mlflow_log(mlflow.keras.log_model, self.model, artifact_path='model')
InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: Incompatible shapes: [4,12,512,512] vs. [4,1,1,0]
[[node tf_bert_for_sequence_classification/bert/encoder/layer_._0/attention/self/add (defined at /databricks/python/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1751) ]]
[[cluster_1_1/xla_compile]]
[[cluster_0_1/merge_oidx_1/_22]]
(1) Invalid argument: Incompatible shapes: [4,12,512,512] vs. [4,1,1,0]
[[node tf_bert_for_sequence_classification/bert/encoder/layer_._0/attention/self/add (defined at /databricks/python/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1751) ]]
[[cluster_1_1/xla_compile]]
0 successful operations.
0 derived errors ignored. [Op:__inference_distributed_function_37652]
Function call stack:
distributed_function -> distributed_function
```
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1963/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/1963/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1962 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1962/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1962/comments | https://api.github.com/repos/huggingface/transformers/issues/1962/events | https://github.com/huggingface/transformers/issues/1962 | 529,145,886 | MDU6SXNzdWU1MjkxNDU4ODY= | 1,962 | TFBertModel ValueError: Tried to convert 'dims' to a tensor and failed. Error: Cannot convert a partially known TensorShape to a Tensor | {
"login": "roccqqck",
"id": 34628766,
"node_id": "MDQ6VXNlcjM0NjI4NzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/34628766?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/roccqqck",
"html_url": "https://github.com/roccqqck",
"followers_url": "https://api.github.com/users/roccqqck/followers",
"following_url": "https://api.github.com/users/roccqqck/following{/other_user}",
"gists_url": "https://api.github.com/users/roccqqck/gists{/gist_id}",
"starred_url": "https://api.github.com/users/roccqqck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/roccqqck/subscriptions",
"organizations_url": "https://api.github.com/users/roccqqck/orgs",
"repos_url": "https://api.github.com/users/roccqqck/repos",
"events_url": "https://api.github.com/users/roccqqck/events{/privacy}",
"received_events_url": "https://api.github.com/users/roccqqck/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> The code below was fine when transformers 2.1.1\r\n> \r\n> but after I update to transformers 2.2.0\r\n> \r\n> ```\r\n> model = TFBertForSequenceClassification.from_pretrained('bert-base-chinese', num_labels=5)\r\n> model.summary()\r\n> \r\n> optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)\r\n> loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\r\n> metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')\r\n> model.compile(optimizer=optimizer, loss=loss, metrics=[metric])\r\n> \r\n> model_fit = model.fit(train_input_ids, train_label, \r\n> batch_size=8, epochs=1, \r\n> validation_data=(validation_input_ids, validation_label)\r\n> )\r\n> ```\r\n> \r\n> ```\r\n> ValueError: Tried to convert 'dims' to a tensor and failed. \r\n> Error: Cannot convert a partially known TensorShape to a Tensor: (None, 512)\r\n> ```\r\n\r\nThe problem seem to cause by input's shape.\r\n\r\nFor transformers 2.2.0\r\nI fix it myself by modifying file \"transformers/modeling_tf_albert.py\"\r\nOn line 648, I change:\r\ninput_shape = input_ids.shape\r\nInto:\r\ninput_shape = tf.shape(input_ids)\r\nThen the problem fixed.\r\n\r\nFeel free to leave a comment if it work for you.",
"@billpku \r\nSorry to test it so lately.\r\n\r\nIt did fix my problem above.\r\nbut it didn't fix the code below which worked fine in 2.1.1 for custom the layer.\r\n\r\n```\r\ninput_layer = Input(shape = (512,), dtype='int64') \r\nbert = TFBertModel.from_pretrained('bert-base-chinese')(input_layer)\r\nbert = bert[0] \r\ndropout = Dropout(0.1)(bert)\r\nflat = Flatten()(dropout)\r\nclassifier = Dense(units=5)(flat) \r\nmodel = Model(inputs=input_layer, outputs=classifier)\r\nmodel.summary()\r\n```\r\n```\r\nValueError: Tried to convert 'dims' to a tensor and failed. \r\nError: Cannot convert a partially known TensorShape to a Tensor: (None, 512)\r\n```",
"Facing the same issue here.\r\n\r\n\r\n",
"@AmalVijayan \r\nI just checked my problem was fixed somehow.\r\n\r\nHow's yours now?\r\n\r\n\r\n> ```\r\n> input_layer = Input(shape = (512,), dtype='int64') \r\n> bert = TFBertModel.from_pretrained('bert-base-chinese')(input_layer)\r\n> bert = bert[0] \r\n> dropout = Dropout(0.1)(bert)\r\n> flat = Flatten()(dropout)\r\n> classifier = Dense(units=5)(flat) \r\n> model = Model(inputs=input_layer, outputs=classifier)\r\n> model.summary()\r\n> ```\r\n\r\n"
] | 1,574 | 1,575 | 1,574 | NONE | null | The code below was fine when transformers 2.1.1
but after I update to transformers 2.2.0
```
model = TFBertForSequenceClassification.from_pretrained('bert-base-chinese', num_labels=5)
model.summary()
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
model_fit = model.fit(train_input_ids, train_label,
batch_size=8, epochs=1,
validation_data=(validation_input_ids, validation_label)
)
```
```
ValueError: Tried to convert 'dims' to a tensor and failed.
Error: Cannot convert a partially known TensorShape to a Tensor: (None, 512)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1962/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1962/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1961 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1961/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1961/comments | https://api.github.com/repos/huggingface/transformers/issues/1961/events | https://github.com/huggingface/transformers/issues/1961 | 529,078,435 | MDU6SXNzdWU1MjkwNzg0MzU= | 1,961 | Problems with running 'run_lm_finetuning.py' with bert | {
"login": "bigzhouj",
"id": 29719942,
"node_id": "MDQ6VXNlcjI5NzE5OTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/29719942?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bigzhouj",
"html_url": "https://github.com/bigzhouj",
"followers_url": "https://api.github.com/users/bigzhouj/followers",
"following_url": "https://api.github.com/users/bigzhouj/following{/other_user}",
"gists_url": "https://api.github.com/users/bigzhouj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bigzhouj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bigzhouj/subscriptions",
"organizations_url": "https://api.github.com/users/bigzhouj/orgs",
"repos_url": "https://api.github.com/users/bigzhouj/repos",
"events_url": "https://api.github.com/users/bigzhouj/events{/privacy}",
"received_events_url": "https://api.github.com/users/bigzhouj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@LysandreJik do we support this one?",
"We do support BERT in `run_lm_finetuning`, however, we do not support loading BERT checkpoints from the original BERT implementation.\r\n\r\nIf you wish to load a checkpoint that was pre-trained/fine-tuned using the original implementation (which seems to be what you're doing here), you can first convert to our implementation using [convert_bert_original_tf_checkpoint_to_pytorch](https://github.com/huggingface/transformers/blob/master/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py), it will then be usable by `run_lm_finetuning`.\r\n\r\nIf you wish to use TensorFlow with the outputted model, you can use the script [convert_pytorch_checkpoint_to_tf2](https://github.com/huggingface/transformers/blob/master/transformers/convert_pytorch_checkpoint_to_tf2.py) which will convert the pytorch model back to tensorflow 2.",
"@LysandreJik - not related to fine-tuning but converting one of the fine-tuned (rum_lm_finetuning.py) model to tensorflow checkpoint. \r\n\r\nHere is the command I used:\r\nbin/python3.6 convert_pytorch_checkpoint_to_tf2.py --tf_dump_path=\"../tf_test/\" --model_type=\"bert\" --pytorch_checkpoint_path=\"../pytorch_model.bin\" --config_file='../config.json'\r\n\r\nHowever, it was throwing the below error (log and stack trace)\r\n\r\n Converting model type 1/1 bert\r\n Converting checkpoint 1/15: ../pytorch_model.bin - model_type bert\r\n\r\nBuilding TensorFlow model from configuration: {\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"finetuning_task\": null,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"is_decoder\": false,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 512,\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"num_labels\": 2,\r\n \"output_attentions\": true,\r\n \"output_hidden_states\": true,\r\n \"output_past\": true,\r\n \"pruned_heads\": {},\r\n \"torchscript\": false,\r\n \"type_vocab_size\": 2,\r\n \"use_bfloat16\": false,\r\n \"vocab_size\": 28996\r\n}\r\n\r\nTraceback (most recent call last):\r\n File \"/home/imagen/skc/bert/transformers-2.2.1/transformers/convert_pytorch_checkpoint_to_tf2.py\", line 248, in <module>\r\n only_convert_finetuned_models=args.only_convert_finetuned_models)\r\n File \"/home/imagen/skc/bert/transformers-2.2.1/transformers/convert_pytorch_checkpoint_to_tf2.py\", line 194, in convert_all_pt_checkpoints_to_tf\r\n compare_with_pt_model=compare_with_pt_model)\r\n File \"/home/imagen/skc/bert/transformers-2.2.1/transformers/convert_pytorch_checkpoint_to_tf2.py\", line 115, in convert_pt_checkpoint_to_tf\r\n tf_model = load_pytorch_checkpoint_in_tf2_model(tf_model, pytorch_checkpoint_path)\r\n File \"/home/imagen/skc/environments/.virtualenvs/lstm_dev_tf2x/lib/python3.6/site-packages/transformers/modeling_tf_pytorch_utils.py\", line 82, in load_pytorch_checkpoint_in_tf2_model\r\n return load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys)\r\n File \"/home/imagen/skc/environments/.virtualenvs/lstm_dev_tf2x/lib/python3.6/site-packages/transformers/modeling_tf_pytorch_utils.py\", line 145, in load_pytorch_weights_in_tf2_model\r\n assert name in pt_state_dict, \"{} not found in PyTorch model\".format(name)\r\n**AssertionError: cls.seq_relationship.weight not found in PyTorch model**\r\n\r\nCan you please explain what did go wrong with the conversion? This is one of the BERT-base fine-tuned model. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,581 | 1,581 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
My rough modification configuration (local)
root_path = "F://IdeaProjects/transformers"
bert_path = "F://BERT/chinese_L-12_H-768_A-12" (Downloaded from the 'https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip')
## Required parameters
parser.add_argument("--train_data_file", default=os.path.join(root_path, "data/train.txt"),...)
parser.add_argument("--output_dir", default=os.path.join(root_path, "output")...)
## Other parameters
parser.add_argument("--eval_data_file", default=os.path.join(root_path, "data/dev.txt"),...)
parser.add_argument("--model_type", default="bert"...)
parser.add_argument("--model_name_or_path", default=bert_path, ...)
parser.add_argument("--mlm", action='store_true', default=True,...)
parser.add_argument("--mlm_probability", type=float, default=0.15,...)
parser.add_argument("--config_name", default=os.path.join(bert_path, "bert_config.json"),..)
parser.add_argument("--tokenizer_name", default=bert_path, ...)
- [ Is my configuration correct?]
I changed this line, 'model = model_class.from_pretrained(args.model_name_or_path,
from_tf=True,#bool('.ckpt' in args.model_name_or_path),' Because my filename does not have this character('.ckpt').
Then running 'run_lm_finetuning.py' causes the problem in the 'modeling_bert.py' file of the 'load_tf_weights_in_bert' method:
'BertOnlyMLMHead' object has no attribute 'bias'
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1961/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1960 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1960/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1960/comments | https://api.github.com/repos/huggingface/transformers/issues/1960/events | https://github.com/huggingface/transformers/issues/1960 | 529,069,815 | MDU6SXNzdWU1MjkwNjk4MTU= | 1,960 | Improving model saving and resuming | {
"login": "bilal2vec",
"id": 29356759,
"node_id": "MDQ6VXNlcjI5MzU2NzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/29356759?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bilal2vec",
"html_url": "https://github.com/bilal2vec",
"followers_url": "https://api.github.com/users/bilal2vec/followers",
"following_url": "https://api.github.com/users/bilal2vec/following{/other_user}",
"gists_url": "https://api.github.com/users/bilal2vec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bilal2vec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bilal2vec/subscriptions",
"organizations_url": "https://api.github.com/users/bilal2vec/orgs",
"repos_url": "https://api.github.com/users/bilal2vec/repos",
"events_url": "https://api.github.com/users/bilal2vec/events{/privacy}",
"received_events_url": "https://api.github.com/users/bilal2vec/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think it's very useful this feature because, as you highlight, VMs and other variables could stop the (long) training process.\r\nTechnically, how could you implement this feature? Surrounding all the training code in a `try/except` statement and when it occurs a particular Exception (which one?) you ask the user whether he/she wants to save till now and after that you saved a file?\r\n\r\nIt could be necessary to write a method for saving the optimizer state, the scheduler and the tokenizer in a **standardized** way. Reading #1925 and #839, I understand that @thomwolf suggests to use standard PyTorch method for saving and loading e.g. scheduler.\r\n\r\n> ## Feature\r\n> The official transformer examples should make it easier to continue training a model that suddenly stopped (e.g. vm gets preempted in the middle of a training run).\r\n> \r\n> To do this, the examples should be updated to save the `optimizer` and `scheduler` states to the `output_dir` as well as the current epoch to disk at the end of each epoch in a `training_state.pt` file. This way, the user could choose to continue training from a previous model checkpoint, but would continue training from the saved epoch and would use the saved tokenizer, optimizer, and scheduler.\r\n> \r\n> ## Motivation\r\n> If your VM gets preempted in the middle of a training run, you won't be able to properly continue training the model since the scheduler will be reset and the current learning rate will be lost.\r\n> \r\n> ## Additional context\r\n> If anyone is interested in this, I can implement the feature and start a pull request.\r\n> \r\n> Related issues:\r\n> \r\n> * #1925\r\n> * #839",
"This would be very useful indeed. \r\n\r\nI would guess that when the VM gets preempted the process in which your program runs is sent a `SIGTERM` or `SIGKILL` signal from the OS. You would need to catch this signal and act accordingly. Look at the [signal module](https://docs.python.org/3/library/signal.html) in python's standard library.\r\n\r\nAn elegant and very general solution would be to define a context manager (`with ... do`) in which we would execute the training and that handles all the backup logic on `SIGTERM` or `SIGKILL`.\r\n\r\nDo you want to give it a shot and make a Pull Request? You can @ me when you have a first draft and I can have a look and give you feedback.",
"@rlouf I was thinking to just save the optimizer and scheduler whenever the model is saved. As for resuming, you could just load in the optimizer state, scheduler state, and current epoch from the checkpoint file when passing in `--model_name_or_path`\r\n\r\nI've got a basic implementation of this on my [fork](https://github.com/bkkaggle/transformers/tree/saving-and-resuming)\r\n",
"I've seen your changes in source code. I think this is the \"easiest\" way to handle this feature and I like it. Do you have tested your code with unit testing? I don't see any test suite. Only for being sure that it works as expected.\r\nN.B: I've left some comments under your changes in your repo, please read them.\r\n\r\n> @rlouf I was thinking to just save the optimizer and scheduler whenever the model is saved. As for resuming, you could just load in the optimizer state, scheduler state, and current epoch from the checkpoint file when passing in `--model_name_or_path`\r\n> \r\n> I've got a basic implementation of this on my [fork](https://github.com/bkkaggle/transformers/tree/saving-and-resuming)",
"@bkkaggle Definitely the easiest solution if you don't mind resuming from the last checkpoint---mine was a bit heavy-duty :)\r\n\r\nI also agree with @TheEdoardo93; Can you rebase your branch on the repo's `master` and open a pull request (if you haven't done so already)? I'm happy to have a closer look.\r\n",
"I've updated my branch and submitted a [WIP] [pull request](https://github.com/huggingface/transformers/pull/1987)"
] | 1,574 | 1,576 | 1,576 | CONTRIBUTOR | null | ## 🚀 Feature
The official transformer examples should make it easier to continue training a model that suddenly stopped (e.g. vm gets preempted in the middle of a training run).
To do this, the examples should be updated to save the `optimizer` and `scheduler` states to the `output_dir` as well as the current epoch to disk at the end of each epoch in a `training_state.pt` file. This way, the user could choose to continue training from a previous model checkpoint, but would continue training from the saved epoch and would use the saved tokenizer, optimizer, and scheduler.
## Motivation
If your VM gets preempted in the middle of a training run, you won't be able to properly continue training the model since the scheduler will be reset and the current learning rate will be lost.
## Additional context
If anyone is interested in this, I can implement the feature and start a pull request.
Related issues:
- https://github.com/huggingface/transformers/issues/1925
- https://github.com/huggingface/transformers/issues/839 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1960/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1960/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1959 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1959/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1959/comments | https://api.github.com/repos/huggingface/transformers/issues/1959/events | https://github.com/huggingface/transformers/pull/1959 | 529,041,623 | MDExOlB1bGxSZXF1ZXN0MzQ1OTkyNjE1 | 1,959 | update Roberta checkpoint conversion | {
"login": "armancohan",
"id": 6425112,
"node_id": "MDQ6VXNlcjY0MjUxMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6425112?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/armancohan",
"html_url": "https://github.com/armancohan",
"followers_url": "https://api.github.com/users/armancohan/followers",
"following_url": "https://api.github.com/users/armancohan/following{/other_user}",
"gists_url": "https://api.github.com/users/armancohan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/armancohan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/armancohan/subscriptions",
"organizations_url": "https://api.github.com/users/armancohan/orgs",
"repos_url": "https://api.github.com/users/armancohan/repos",
"events_url": "https://api.github.com/users/armancohan/events{/privacy}",
"received_events_url": "https://api.github.com/users/armancohan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1959?src=pr&el=h1) Report\n> Merging [#1959](https://codecov.io/gh/huggingface/transformers/pull/1959?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5e289f69bc564c94132f77c89a34e5f1dd69a592?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1959?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1959 +/- ##\n=======================================\n Coverage 81.47% 81.47% \n=======================================\n Files 122 122 \n Lines 18342 18342 \n=======================================\n Hits 14945 14945 \n Misses 3397 3397\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1959?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1959?src=pr&el=footer). Last update [5e289f6...5190320](https://codecov.io/gh/huggingface/transformers/pull/1959?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks a lot @armancohan \r\n\r\nI think we'll want to keep the possibility to import both types of models for backward compatibility. Can you add a switch based on an identification of the model type?",
"@armancohan (rebased on top of master so I force-pushed to your fork)",
"So, the fairseq weights themselves didn't change, it's the multiheadattention API that did, in fairseq `0.9.0`. So I'll just check that the fairseq version is >= 0.9 in the script.\r\n\r\nI've also updated the script to not hardcode the vocab length, which should make it compatible with other roberta-like fairseq models such as CamemBERT + XLM-R out of the box.\r\n\r\ncc @myleott @louismartin ",
"thanks @julien-c "
] | 1,574 | 1,576 | 1,576 | CONTRIBUTOR | null | - update to fix fairseq Roberta checkpoint conversion
Fairseq had removed `in_proj_weight` and `in_proj_bias` from the self attention module:
https://github.com/pytorch/fairseq/commit/4c6b689eebe66a53717dacf28cba7a11b6ffa64f
- create save directory if not exist | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1959/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1959",
"html_url": "https://github.com/huggingface/transformers/pull/1959",
"diff_url": "https://github.com/huggingface/transformers/pull/1959.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1959.patch",
"merged_at": 1576624343000
} |
https://api.github.com/repos/huggingface/transformers/issues/1958 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1958/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1958/comments | https://api.github.com/repos/huggingface/transformers/issues/1958/events | https://github.com/huggingface/transformers/issues/1958 | 529,012,165 | MDU6SXNzdWU1MjkwMTIxNjU= | 1,958 | run_ner.py --do_predict inference mode errors. Right data format? | {
"login": "zampierimatteo91",
"id": 40203129,
"node_id": "MDQ6VXNlcjQwMjAzMTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/40203129?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zampierimatteo91",
"html_url": "https://github.com/zampierimatteo91",
"followers_url": "https://api.github.com/users/zampierimatteo91/followers",
"following_url": "https://api.github.com/users/zampierimatteo91/following{/other_user}",
"gists_url": "https://api.github.com/users/zampierimatteo91/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zampierimatteo91/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zampierimatteo91/subscriptions",
"organizations_url": "https://api.github.com/users/zampierimatteo91/orgs",
"repos_url": "https://api.github.com/users/zampierimatteo91/repos",
"events_url": "https://api.github.com/users/zampierimatteo91/events{/privacy}",
"received_events_url": "https://api.github.com/users/zampierimatteo91/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Three questions:\r\n- the error occurs at line 522, so at line 507 you've saved the file called `test_results.txt`. Do you see the content of this file and whether is it correct?\r\n- the input file has been formatted as CoNLL-2003?\r\n- **N.B**: moreover, the code block from line 512 to line 525 save to .txt file the predictions obtained from the NER. But already at line 505 you have the `predictions` variable. Have you seen the content of this variable? Maybe, it is only a saving problem.\r\n\r\n> ## Questions & Help\r\n> Hello again,\r\n> \r\n> I'm here to bother you one more time.\r\n> \r\n> I fine-tuned preloaded BioBERT weights on a custom dataset to run biomedical NER.\r\n> \r\n> Now I want to use the model for inference mode on a 'raw' set of documents. I renamed this set 'test.txt' and formatted it the following way (documents are separated by '-DOCSTART- (num_doc)' lines):\r\n> \r\n> ```\r\n> to O\r\n> be O\r\n> referred O\r\n> to O\r\n> the O\r\n> location O\r\n> of O\r\n> the O\r\n> disease O\r\n> in O\r\n> the O\r\n> skeletal O\r\n> structures O\r\n> examined O\r\n> ; O\r\n> \r\n> unchanged O\r\n> the O\r\n> areas O\r\n> of O\r\n> bone O\r\n> rarefaction O\r\n> reported O\r\n> to O\r\n> the O\r\n> sternum O\r\n> as O\r\n> a O\r\n> result O\r\n> of O\r\n> median O\r\n> sternotomy O\r\n> . O\r\n> ```\r\n> \r\n> I had to add the 'fake' labels on the right and place a space \" \" between col1 and col2.\r\n> \r\n> The error I now get is:\r\n> \r\n> ```\r\n> Traceback (most recent call last):\r\n> File \"run_ner.py\", line 531, in <module>\r\n> main()\r\n> File \"run_ner.py\", line 522, in main\r\n> output_line = line.split()[0] + \" \" + predictions[example_id].pop(0) + \"\\n\"\r\n> IndexError: list index out of range\r\n> ```\r\n> \r\n> Many thanks again.",
"Ciao @TheEdoardo93 ,\r\nThanks for your support!\r\n\r\n - I formatted the test set trying to follow the indications from the tutorial on the german-eval, with the first column being the token and the second being the B-I-O tags (in this set it's just a pile of Os to fill the column). They are space-separated.\r\n- `test_results.txt` is saved and shows precision, recall, f-1, and loss. All are terrible of course, as the test set was actually filled with the dummy BIO tags.\r\n- `test_predictions.txt` is truncated after about 50 lines of token+BIO prediction.\r\n\r\nI'm now trying to print the content of `predictions`, I'll let you know.",
"I wait your `predictions` variable content :D\r\nWe can implement a saving method that works as we expect (and not using the code lines in the `run_ner.py` script and see what happens!)\r\n\r\n> Ciao @TheEdoardo93 ,\r\n> Thanks for your support!\r\n> \r\n> * I formatted the test set trying to follow the indications from the tutorial on the german-eval, with the first column being the token and the second being the B-I-O tags (in this set it's just a pile of Os to fill the column). They are space-separated.\r\n> * `test_results.txt` is saved and shows precision, recall, f-1, and loss. All are terrible of course, as the test set was actually filled with the dummy BIO tags.\r\n> * `test_predictions.txt` is truncated after about 50 lines of token+BIO prediction.\r\n> \r\n> I'm now trying to print the content of `predictions`, I'll let you know.",
"I'm back.\r\n\r\nWhat I did was: changed columns' separation from tab to space (I was wrong in the previous comment, I thought I already changed it).\r\n\r\nNow the code runs properly and `test_predictions.txt` is complete.\r\nThis is a snapshot of `print(predictions)`:\r\n```\r\n[['O', 'O', 'B-Organism_subdivision', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Organ', 'O', 'B-Organ', 'O', 'B-Multi-tissue_structure', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Organism_subdivision', 'I-Organism_subdivision', 'O', 'O', 'O', 'O', 'O', 'B-Cancer', 'I-Cancer', 'O'], ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'], ['O', 'O', 'B-Immaterial_anatomical_entity', 'O', 'O', 'O', 'O', 'O', 'O', 'O'], ['O', 'O', 'O', 'O', 'O', 'O', 'B-Multi-tissue_structure', 'B-Multi-tissue_structure', 'O', 'O', 'O', 'O', 'O', 'O', 'O'], ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'], ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Immaterial_anatomical_entity', 'O', 'O'], ..., ['O', 'O', 'O', 'O', 'B-Organ', 'O', 'B-Organ', 'O', 'B-Organ', 'O', 'B-Organ', 'O'], ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Tissue', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Tissue', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Tissue', 'O', 'O', 'O', 'O', 'B-Multi-tissue_structure', 'O', 'O', 'O', 'O', 'O', 'O', 'O']]\r\n```\r\nThere is another minor issue I guess, i.e. a long series of warnings about no predictions because of exceeded maximum sequence length. The non-predicted tokens appear to be not really relevant for my final interest, but I'd like to have a complete output nonetheless.\r\nI will try to place a newline not only after usual end-of-sentence punctuation (.!?), but also after semi-colons and colons, in order to split each document in more pieces.\r\nIs it a strategy that makes sense or have I misinterpreted the meaning of the maximum sequence length?\r\n",
"Do you have sentences with length greater than **512 tokens**? BioBERT allows to have sentences with 512 tokens length maximum, as stated in the [paper](https://watermark.silverchair.com/btz682.pdf?token=AQECAHi208BE49Ooan9kkhW_Ercy7Dm3ZL_9Cf3qfKAc485ysgAAApcwggKTBgkqhkiG9w0BBwagggKEMIICgAIBADCCAnkGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMYCT-oIlIkXngiEprAgEQgIICSsx4GD3tbVdWpu1VPO7avpF_9v-YeT2MtTj6ysC7p_RRxtqG74n5C_tpst2LvM8UKCDfiuvU4bWi8PfJiJfKoQiWwR4AH4K0JC2m-Q8YC2V8_Jfuk-AL_CR8oWJc2U40FdB2fyV2tdoYxV0v1A35Qjg6PEujCkc3ztZqcGctW1_awddqskqkGF8Oz02kiwFQyHHlMPuRAewMopnxii_Pqo-nSNynCl03WCCGUKPbkC-vbPwIjo7vjz-opJQaNcTrNOLI8xPzQm3qT5_R85w3mm-CpHbo2rj4LW7YkJrswc7Z4KOlEfdq7AC5WkiIYhYqyauVLTDNzVYwSYJ_L6RsPeNlfxv3rm71J7fppWO_fu4Mbn8vnzmjKS0nqxdEbRcI4bGkpkjCvW-sVa3FIcRbNlOp_fH_PTeMf3VwIR_wGR0Nrw_80_BMzqy774SB1LitxarWsA7h3dU7Gp1f162TloTdqISAsTzfJJSTa4YVU2qHDp2iRzghvsBlXGhtuuiNkLQ_TblRFq3hdMpLtpHH5KlfahZ0tMvfBvbc_YGLi-9U5NmQbUnM0unhb73mQ5SneLAAD9JlLQv-4pXwYDIGi9ekn5G2RwueTOKSiKji8dm1rCtmUFXVL56WsPUdNkgJROoqGCC87_iVdV95TjpL7jVvNfOX8Bvh1eF_iCGyfrsKyK1aDpvY8B4vt3uUJowPlFjDo21AXOe53aAgnb9yay-t53WzmTNw-Q6lfZNiWsSQn9H1cUi7g8P5bRruZkmL8HaYlZje8TVNIn4).\r\n> The maximum sequence length was fixed to 512\r\n\r\n If you have sentences with more than 512 tokens, you have to apply different workaround, e.g. splitting a sentence length 1024 in two different sentences of 512 length and combine in some manner their output.\r\n\r\nHowever, the strategy you've proposed (e.g. split by comma, dot, semi-column, etc.) works! Try to follow this approach and share the results with us! I suggest you to do a visual evaluation/comparison between the current output and the output you'll obtain with the strategy highlight by you.\r\n\r\n> I'm back.\r\n> \r\n> What I did was: changed columns' separation from tab to space (I was wrong in the previous comment, I thought I already changed it).\r\n> \r\n> Now the code runs properly and `test_predictions.txt` is complete.\r\n> This is a snapshot of `print(predictions)`:\r\n> \r\n> ```\r\n> [['O', 'O', 'B-Organism_subdivision', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Organ', 'O', 'B-Organ', 'O', 'B-Multi-tissue_structure', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Organism_subdivision', 'I-Organism_subdivision', 'O', 'O', 'O', 'O', 'O', 'B-Cancer', 'I-Cancer', 'O'], ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'], ['O', 'O', 'B-Immaterial_anatomical_entity', 'O', 'O', 'O', 'O', 'O', 'O', 'O'], ['O', 'O', 'O', 'O', 'O', 'O', 'B-Multi-tissue_structure', 'B-Multi-tissue_structure', 'O', 'O', 'O', 'O', 'O', 'O', 'O'], ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'], ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Immaterial_anatomical_entity', 'O', 'O'], ..., ['O', 'O', 'O', 'O', 'B-Organ', 'O', 'B-Organ', 'O', 'B-Organ', 'O', 'B-Organ', 'O'], ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Tissue', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Tissue', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Tissue', 'O', 'O', 'O', 'O', 'B-Multi-tissue_structure', 'O', 'O', 'O', 'O', 'O', 'O', 'O']]\r\n> ```\r\n> \r\n> There is another minor issue I guess, i.e. a long series of warnings about no predictions because of exceeded maximum sequence length. The non-predicted tokens appear to be not really relevant for my final interest, but I'd like to have a complete output nonetheless.\r\n> I will try to place a newline not only after usual end-of-sentence punctuation (.!?), but also after semi-colons and colons, in order to split each document in more pieces.\r\n> Is it a strategy that makes sense or have I misinterpreted the meaning of the maximum sequence length?",
"Quite funnily, now a lot more tokens are without predictions.\r\nWhat I did was just adding a newline after each semicolon with `sed`.\r\n\r\nA question that I thought was easy to answer to: what constitutes a sequences in BERT relative to this task? Is it a sequence of tokens between empty lines? Or between defined punctuation?",
"Taken from the official BERT [paper](https://arxiv.org/pdf/1810.04805.pdf):\r\n\r\n> Throughout this work, a “sentence” can be an arbitrary span of contiguous text, rather than an actual linguistic sentence. A “sequence” refers to the input token sequence to BERT, which may be a single sentence or two sentences packed together.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,580 | 1,580 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hello again,
I'm here to bother you one more time.
I fine-tuned preloaded BioBERT weights on a custom dataset to run biomedical NER.
Now I want to use the model for inference mode on a 'raw' set of documents. I renamed this set 'test.txt' and formatted it the following way (documents are separated by '-DOCSTART- (num_doc)' lines):
```
to O
be O
referred O
to O
the O
location O
of O
the O
disease O
in O
the O
skeletal O
structures O
examined O
; O
unchanged O
the O
areas O
of O
bone O
rarefaction O
reported O
to O
the O
sternum O
as O
a O
result O
of O
median O
sternotomy O
. O
```
I had to add the 'fake' labels on the right and place a space " " between col1 and col2.
The error I now get is:
```
Traceback (most recent call last):
File "run_ner.py", line 531, in <module>
main()
File "run_ner.py", line 522, in main
output_line = line.split()[0] + " " + predictions[example_id].pop(0) + "\n"
IndexError: list index out of range
```
Many thanks again. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1958/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1957 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1957/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1957/comments | https://api.github.com/repos/huggingface/transformers/issues/1957/events | https://github.com/huggingface/transformers/issues/1957 | 529,007,087 | MDU6SXNzdWU1MjkwMDcwODc= | 1,957 | Do we need to add [CLS] and [SEP] for BertForMaskedLM ? | {
"login": "yaolu",
"id": 8982361,
"node_id": "MDQ6VXNlcjg5ODIzNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8982361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaolu",
"html_url": "https://github.com/yaolu",
"followers_url": "https://api.github.com/users/yaolu/followers",
"following_url": "https://api.github.com/users/yaolu/following{/other_user}",
"gists_url": "https://api.github.com/users/yaolu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yaolu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaolu/subscriptions",
"organizations_url": "https://api.github.com/users/yaolu/orgs",
"repos_url": "https://api.github.com/users/yaolu/repos",
"events_url": "https://api.github.com/users/yaolu/events{/privacy}",
"received_events_url": "https://api.github.com/users/yaolu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"like this ? \r\n```\r\ninput_ids = torch.tensor(tokenizer.encode(\"Hello, my dog is cute\", add_special_tokens=True)).unsqueeze(0) # Batch size 1 \r\n```",
"The output **without** `add_special_tokens=True`:\r\n```\r\nimport torch\r\nfrom transformers import BertTokenizer, BertForMaskedLM\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\ntext = 'Hello, my dog is cute'\r\ninput_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)\r\n>>> tensor([[ 101, 7592, 1010, 2026, 3899, 2003, 10140, 102]])\r\n```\r\nIf you inspect the vocabulary built by BertTokenizer (accessible through `tokenizer.vocab`), you can see that the token [CLS] and [SEP] have ID 101 and 102, respectively. So, `tokenizer.encode` already add these two tokens at the start and at the end of each encoded sentence.\r\n\r\nThe output **with** `add_special_tokens=True`:\r\n```\r\nimport torch\r\nfrom transformers import BertTokenizer\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased', add_special_tokens=True)\r\ntext = 'Hello, my dog is cute'\r\ninput_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)\r\n>>> tensor([[ 101, 7592, 1010, 2026, 3899, 2003, 10140, 102]])\r\n```\r\n\r\nAs you can see, **the output obtained is the same**.\r\n\r\nMoreover, this night Transformers passed from 2.1.1 to 2.2.0 version, and reading [here](https://github.com/huggingface/transformers/releases) we can see the statement **Tokenizers now add special tokens by default.**.\r\n\r\n> ## Questions & Help\r\n> https://github.com/huggingface/transformers/blob/cc7968227e08858df4a5c618c739e1a3ca050196/transformers/modeling_bert.py#L837-L841\r\n> \r\n> Seems like the example is wrong?",
"Yes, my version v2.1.1 will not set add_special_tokens to True by default. Thanks for your comment.\r\n\r\n> like this ?\r\n> \r\n> ```\r\n> input_ids = torch.tensor(tokenizer.encode(\"Hello, my dog is cute\", add_special_tokens=True)).unsqueeze(0) # Batch size 1 \r\n> ```",
"> The output **without** `add_special_tokens=True`:\r\n> \r\n> ```\r\n> import torch\r\n> from transformers import BertTokenizer, BertForMaskedLM\r\n> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n> text = 'Hello, my dog is cute'\r\n> input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)\r\n> >>> tensor([[ 101, 7592, 1010, 2026, 3899, 2003, 10140, 102]])\r\n> ```\r\n> \r\n> If you inspect the vocabulary built by BertTokenizer (accessible through `tokenizer.vocab`), you can see that the token [CLS] and [SEP] have ID 101 and 102, respectively. So, `tokenizer.encode` already add these two tokens at the start and at the end of each encoded sentence.\r\n> \r\n> The output **with** `add_special_tokens=True`:\r\n> \r\n> ```\r\n> import torch\r\n> from transformers import BertTokenizer\r\n> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', add_special_tokens=True)\r\n> text = 'Hello, my dog is cute'\r\n> input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)\r\n> >>> tensor([[ 101, 7592, 1010, 2026, 3899, 2003, 10140, 102]])\r\n> ```\r\n> \r\n> As you can see, **the output obtained is the same**.\r\n> \r\n> Moreover, this night Transformers passed from 2.1.1 to 2.2.0 version, and reading [here](https://github.com/huggingface/transformers/releases) we can see the statement **Tokenizers now add special tokens by default.**.\r\n> \r\n> > ## Questions & Help\r\n> > https://github.com/huggingface/transformers/blob/cc7968227e08858df4a5c618c739e1a3ca050196/transformers/modeling_bert.py#L837-L841\r\n> > \r\n> > Seems like the example is wrong?\r\n\r\ncreate pull request #1971 to make this less ambiguous across different versions. "
] | 1,574 | 1,574 | 1,574 | CONTRIBUTOR | null | ## ❓ Questions & Help
https://github.com/huggingface/transformers/blob/cc7968227e08858df4a5c618c739e1a3ca050196/transformers/modeling_bert.py#L837-L841
Seems like the example is wrong? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1957/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1956 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1956/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1956/comments | https://api.github.com/repos/huggingface/transformers/issues/1956/events | https://github.com/huggingface/transformers/issues/1956 | 528,922,526 | MDU6SXNzdWU1Mjg5MjI1MjY= | 1,956 | get_linear_schedule_with_warmup Scheduler | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You might be using an old version of the library, try updating it to v2.2.0",
"You're trying to import a method only available in more recent Transformers versions from a (very old) Transformers version called _pytorch-transformers_.\r\n\r\nWith Transformers 2.1.1 (the second recent one) and the new version 2.2.0, you can import correctly the `get_linear_schedule_with_warmup`. In fact, Transformers modifies its source code for what concern the optimization process (e.g. learning rate). You can see the changes [here](https://github.com/huggingface/transformers/commit/022525b0031bcdbbb62d1223f75919983f2ac426).\r\n\r\n> Hello,\r\n> \r\n> When I try to execute the line of code below, Python gives me an import error:\r\n> \r\n> ```js\r\n> from pytorch_transformers import (GPT2Config, GPT2LMHeadModel, GPT2DoubleHeadsModel,\r\n> AdamW, get_linear_schedule_with_warmup)\r\n> \r\n> ImportError: cannot import name 'get_linear_schedule_with_warmup' from 'pytorch_transformers' (/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/pytorch_transformers/__init__.py)\r\n> ```\r\n> \r\n> What should I then import to use the linear scheduler with warm up?\r\n> \r\n> Thank you,",
"You should use the `transformers` library instead of the `pytorch_transformers`. The `get_linear_schedule_with_warmup` is only defined in the former, in its latest version.",
"Thank you all,",
"Hello,\r\n\r\nSo I installed transformers 2.2.0,\r\n```\r\npip install transformers\r\n```\r\nand tried to import the same things:\r\n```js\r\nfrom transformers import (GPT2Config, GPT2LMHeadModel, GPT2DoubleHeadsModel,\r\n AdamW, get_linear_schedule_with_warmup) \r\n```\r\n\r\nand it's still giving me the same error:\r\n```\r\nfrom transformers import (GPT2Config, GPT2LMHeadModel, GPT2DoubleHeadsModel,\r\n AdamW, get_linear_schedule_with_warmup)\r\n2019-11-27 08:40:15.940560: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA\r\n2019-11-27 08:40:15.954925: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fe20d2f5a50 executing computations on platform Host. Devices:\r\n2019-11-27 08:40:15.954938: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version\r\nTraceback (most recent call last):\r\n\r\n File \"<ipython-input-1-99af21631e15>\", line 1, in <module>\r\n from transformers import (GPT2Config, GPT2LMHeadModel, GPT2DoubleHeadsModel,\r\n\r\nImportError: cannot import name 'get_linear_schedule_with_warmup' from 'transformers' (/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/transformers/__init__.py)\r\n```\r\n\r\nWhat should I do to be able to use the linear warm up scheduler?\r\n\r\nThank you,",
"It's very strange.. In my environment works as expected the import statement.\r\n```\r\n> import transformers\r\n> transformers.__version__\r\n>>> '2.2.0'\r\n> from transformers import get_linear_schedule_with_warmup\r\n> ...\r\n```\r\nPlease, share with us your **OS** and your **Python version**.\r\n\r\n> Hello,\r\n> \r\n> So I installed transformers 2.2.0,\r\n> \r\n> ```\r\n> pip install transformers\r\n> ```\r\n> \r\n> and tried to import the same things:\r\n> \r\n> ```js\r\n> from transformers import (GPT2Config, GPT2LMHeadModel, GPT2DoubleHeadsModel,\r\n> AdamW, get_linear_schedule_with_warmup) \r\n> ```\r\n> \r\n> and it's still giving me the same error:\r\n> \r\n> ```\r\n> from transformers import (GPT2Config, GPT2LMHeadModel, GPT2DoubleHeadsModel,\r\n> AdamW, get_linear_schedule_with_warmup)\r\n> 2019-11-27 08:40:15.940560: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA\r\n> 2019-11-27 08:40:15.954925: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fe20d2f5a50 executing computations on platform Host. Devices:\r\n> 2019-11-27 08:40:15.954938: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version\r\n> Traceback (most recent call last):\r\n> \r\n> File \"<ipython-input-1-99af21631e15>\", line 1, in <module>\r\n> from transformers import (GPT2Config, GPT2LMHeadModel, GPT2DoubleHeadsModel,\r\n> \r\n> ImportError: cannot import name 'get_linear_schedule_with_warmup' from 'transformers' (/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/transformers/__init__.py)\r\n> ```\r\n> \r\n> What should I do to be able to use the linear warm up scheduler?\r\n> \r\n> Thank you,",
"Hello,\r\n\r\nI tried uninstalling transformers and install the module again and it works now.\r\n\r\nThank you for all your help,"
] | 1,574 | 1,574 | 1,574 | NONE | null | Hello,
When I try to execute the line of code below, Python gives me an import error:
```js
from pytorch_transformers import (GPT2Config, GPT2LMHeadModel, GPT2DoubleHeadsModel,
AdamW, get_linear_schedule_with_warmup)
ImportError: cannot import name 'get_linear_schedule_with_warmup' from 'pytorch_transformers' (/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/pytorch_transformers/__init__.py)
```
What should I then import to use the linear scheduler with warm up?
Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1956/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1955 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1955/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1955/comments | https://api.github.com/repos/huggingface/transformers/issues/1955/events | https://github.com/huggingface/transformers/issues/1955 | 528,908,130 | MDU6SXNzdWU1Mjg5MDgxMzA= | 1,955 | run_squad.py crashes during do_eval | {
"login": "Phirefly9",
"id": 16687050,
"node_id": "MDQ6VXNlcjE2Njg3MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/16687050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Phirefly9",
"html_url": "https://github.com/Phirefly9",
"followers_url": "https://api.github.com/users/Phirefly9/followers",
"following_url": "https://api.github.com/users/Phirefly9/following{/other_user}",
"gists_url": "https://api.github.com/users/Phirefly9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Phirefly9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Phirefly9/subscriptions",
"organizations_url": "https://api.github.com/users/Phirefly9/orgs",
"repos_url": "https://api.github.com/users/Phirefly9/repos",
"events_url": "https://api.github.com/users/Phirefly9/events{/privacy}",
"received_events_url": "https://api.github.com/users/Phirefly9/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"same issue",
"I tracked it down further this morning and found the problem, you cannot run do_eval in pytorch distributed mode, do_eval works completely fine when there is no pytorch distributed in the equation. This should probably result in a change to the README\r\n\r\n",
"👍Just ran it, I confirm that do_eval runs well without distributed mode",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,580 | 1,580 | NONE | null | ## 🐛 Bug
When running run_squad.py as provided in the README once training is complete the prediction/evaluation component of the script crashes.
No prediction files are written are written.
Model I am using (Bert, XLNet....): Bert
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [ ] the official example scripts: (give details) example squad fine tuning
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name) official squad dev set 1.1
* [ ] my own task or dataset: (give details) the checkpoint was made using a custom training dataset in squad format, but it appears to be an eval bug
## To Reproduce
Steps to reproduce the behavior:
1. finish training as specified in README
2. I ran with this command
CUDA_VISIBLE_DEVICES=10,11,12,13,14,15 python -m torch.distributed.launch --nproc_per_node=6 ./examples/run_squad.py --model_type bert --model_name_or_path bert-large-uncased-whole-word-masking --do_train --do_eval --do_lower_case --train_file /data/data/SQUAD/train-v1.1json --predict_file /data/data/SQUAD/dev-v1.1.json --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir models/wwm_uncased_finetuned_squad_supp/ --per_gpu_eval_batch_size=6 --per_gpu_train_batch_size=6 --save_steps 500
I also ran with just do_eval using the same model and it produced the same error
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
11/26/2019 18:29:01 - INFO - __main__ - Saving features into cached file /data/data/SQUAD/cached_dev_bert-large-uncased-whole-word-masking_384
11/26/2019 18:29:20 - INFO - __main__ - ***** Running evaluation *****
11/26/2019 18:29:20 - INFO - __main__ - Num examples = 10833
11/26/2019 18:29:20 - INFO - __main__ - Batch size = 6
Evaluating: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 301/301 [00:48<00:00, 6.20it/s]
11/26/2019 18:30:09 - INFO - __main__ - Evaluation done in total 48.868770 secs (0.004511 sec per example)
11/26/2019 18:30:09 - INFO - utils_squad - Writing predictions to: models/wwm_uncased_finetuned_squad_supp/predictions_.json
11/26/2019 18:30:09 - INFO - utils_squad - Writing nbest to: models/wwm_uncased_finetuned_squad_supp/nbest_predictions_.json
Traceback (most recent call last):
File "./examples/run_squad.py", line 573, in <module>
main()
File "./examples/run_squad.py", line 562, in main
result = evaluate(args, model, tokenizer, prefix=global_step)
File "./examples/run_squad.py", line 284, in evaluate
args.version_2_with_negative, args.null_score_diff_threshold)
File "/home/clong/git/transformers/examples/utils_squad.py", line 532, in write_predictions
result = unique_id_to_result[feature.unique_id]
KeyError: 1000000000
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/conda/lib/python3.6/site-packages/torch/distributed/launch.py", line 253, in <module>
main()
File "/opt/conda/lib/python3.6/site-packages/torch/distributed/launch.py", line 249, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/conda/bin/python', '-u', './examples/run_squad.py', '--local_rank=5', '--model_type', 'bert', '--model_name_or_path', 'bert-large-uncased-whole-word-masking', '--do_eval', '--do_lower_case', '--train_file', '/data/data/SQUAD/train-v1.1.json', '--predict_file', '/data/data/SQUAD/dev-v1.1.json', '--learning_rate', '3e-5', '--num_train_epochs', '2', '--max_seq_length', '384', '--doc_stride', '128', '--output_dir', 'models/wwm_uncased_finetuned_squad_supp/', '--per_gpu_eval_batch_size=6', '--per_gpu_train_batch_size=6', '--save_steps', '500']' returned non-zero exit status 1.
## Expected behavior
no crashing and predictions written
## Environment
* OS: Ubuntu 18.04 in NVIDIA pytorch container
* Python version: 3.6.9 anaconda
* PyTorch version: 1.3.0 custom nvidia version
* PyTorch Transformers version (or branch): pip install
* Using GPU ? yes
* Distributed of parallel setup ? using 6 of 16 gpu's on system
* Any other relevant information:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1955/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1954 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1954/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1954/comments | https://api.github.com/repos/huggingface/transformers/issues/1954/events | https://github.com/huggingface/transformers/issues/1954 | 528,893,274 | MDU6SXNzdWU1Mjg4OTMyNzQ= | 1,954 | BertForMultipleChoice | {
"login": "apratim-mishra",
"id": 26097066,
"node_id": "MDQ6VXNlcjI2MDk3MDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/26097066?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apratim-mishra",
"html_url": "https://github.com/apratim-mishra",
"followers_url": "https://api.github.com/users/apratim-mishra/followers",
"following_url": "https://api.github.com/users/apratim-mishra/following{/other_user}",
"gists_url": "https://api.github.com/users/apratim-mishra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apratim-mishra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apratim-mishra/subscriptions",
"organizations_url": "https://api.github.com/users/apratim-mishra/orgs",
"repos_url": "https://api.github.com/users/apratim-mishra/repos",
"events_url": "https://api.github.com/users/apratim-mishra/events{/privacy}",
"received_events_url": "https://api.github.com/users/apratim-mishra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, could you try and test this on a more recent version of the library and let me know if it works?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,580 | 1,580 | NONE | null | ## ❓ Questions & Help
Shape Error when using BrtForMultipleChoice

Below is the model i used:

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1954/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1953 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1953/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1953/comments | https://api.github.com/repos/huggingface/transformers/issues/1953/events | https://github.com/huggingface/transformers/issues/1953 | 528,863,158 | MDU6SXNzdWU1Mjg4NjMxNTg= | 1,953 | the output type of TFBertModel is weird | {
"login": "roccqqck",
"id": 34628766,
"node_id": "MDQ6VXNlcjM0NjI4NzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/34628766?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/roccqqck",
"html_url": "https://github.com/roccqqck",
"followers_url": "https://api.github.com/users/roccqqck/followers",
"following_url": "https://api.github.com/users/roccqqck/following{/other_user}",
"gists_url": "https://api.github.com/users/roccqqck/gists{/gist_id}",
"starred_url": "https://api.github.com/users/roccqqck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/roccqqck/subscriptions",
"organizations_url": "https://api.github.com/users/roccqqck/orgs",
"repos_url": "https://api.github.com/users/roccqqck/repos",
"events_url": "https://api.github.com/users/roccqqck/events{/privacy}",
"received_events_url": "https://api.github.com/users/roccqqck/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This is because in a bert pretraining progress, there are two tasks: masked token prediction and next sentence predition . The first needs hidden state of each tokens ( shape: [batch_size, sequence_length, hidden_size]) the second needs the embedding of the whole sequence (shape : [batch_size, hidden_size] ) . \r\n\r\nAnd there is also position left for some one who want to get all the hidden state from each level inside the model ( may represent different level of abstraction besides the last one ) or the attention matrix. ",
"> This is because in a bert pretraining progress, there are two tasks: masked token prediction and next sentence predition . The first needs hidden state of each tokens ( shape: [batch_size, sequence_length, hidden_size]) the second needs the embedding of the whole sequence (shape : [batch_size, hidden_size] ) .\r\n\r\n\r\nBecause of this\r\nif I want use tf.keras to custom the layer below TFBertModel\r\nI have to add this particular line\r\nbert = bert[0]\r\n\r\n```\r\ninput_layer = Input(shape = (512,), dtype='int64') \r\nbert = TFBertModel.from_pretrained('bert-base-chinese')(input_layer)\r\n\r\nbert = bert[0] # I have to add this particular line\r\n\r\ndropout = Dropout(0.1)(bert)\r\nflat = Flatten()(dropout)\r\nclassifier = Dense(units=5)(flat) \r\nmodel = Model(inputs=input_layer, outputs=classifier)\r\nmodel.summary()\r\n```\r\n```\r\nModel: \"model\"\r\n_________________________________________________________________\r\nLayer (type) Output Shape Param # \r\n=================================================================\r\ninput_1 (InputLayer) [(None, 512)] 0 \r\n_________________________________________________________________\r\ntf_bert_model (TFBertModel) ((None, 512, 768), (None, 102267648 \r\n_________________________________________________________________\r\ndropout_37 (Dropout) (None, 512, 768) 0 \r\n_________________________________________________________________\r\nflatten (Flatten) (None, 393216) 0 \r\n_________________________________________________________________\r\ndense (Dense) (None, 5) 1966085 \r\n=================================================================\r\nTotal params: 104,233,733\r\nTrainable params: 104,233,733\r\nNon-trainable params: 0\r\n```",
"> > This is because in a bert pretraining progress, there are two tasks: masked token prediction and next sentence predition . The first needs hidden state of each tokens ( shape: [batch_size, sequence_length, hidden_size]) the second needs the embedding of the whole sequence (shape : [batch_size, hidden_size] ) .\r\n> \r\n> Because of this\r\n> if I want use tf.keras to custom the layer below TFBertModel\r\n> I have to add this particular line\r\n> bert = bert[0]\r\n> \r\n> ```\r\n> input_layer = Input(shape = (512,), dtype='int64') \r\n> bert = TFBertModel.from_pretrained('bert-base-chinese')(input_layer)\r\n> \r\n> bert = bert[0] # I have to add this particular line\r\n> \r\n> dropout = Dropout(0.1)(bert)\r\n> flat = Flatten()(dropout)\r\n> classifier = Dense(units=5)(flat) \r\n> model = Model(inputs=input_layer, outputs=classifier)\r\n> model.summary()\r\n> ```\r\n> \r\n> ```\r\n> Model: \"model\"\r\n> _________________________________________________________________\r\n> Layer (type) Output Shape Param # \r\n> =================================================================\r\n> input_1 (InputLayer) [(None, 512)] 0 \r\n> _________________________________________________________________\r\n> tf_bert_model (TFBertModel) ((None, 512, 768), (None, 102267648 \r\n> _________________________________________________________________\r\n> dropout_37 (Dropout) (None, 512, 768) 0 \r\n> _________________________________________________________________\r\n> flatten (Flatten) (None, 393216) 0 \r\n> _________________________________________________________________\r\n> dense (Dense) (None, 5) 1966085 \r\n> =================================================================\r\n> Total params: 104,233,733\r\n> Trainable params: 104,233,733\r\n> Non-trainable params: 0\r\n> ```\r\n\r\nThat's right. But for sentence level classification , I recommend you to use the embedding of whole sequence .\r\n```\r\nbert = bert[1] # instead of bert = bert[0] \r\n```\r\n\r\nJust like what the official sequence classificiation does in **TFBertForSequenceClassification** class at\r\nhttps://github.com/huggingface/transformers/blob/master/transformers/modeling_tf_bert.py",
"> ```\r\n> bert = bert[1] # instead of bert = bert[0] \r\n> ```\r\n\r\nMay I ask why?\r\nIt looks like it reduce the features of flatten layer.\r\nIt doesn't look like whole.\r\n```\r\ninput_layer = Input(shape = (512,), dtype='int64') \r\nbert = TFBertModel.from_pretrained('bert-base-chinese')(input_layer)\r\nbert = bert[1] \r\ndropout = Dropout(0.1)(bert)\r\nflat = Flatten()(dropout)\r\nclassifier = Dense(units=5)(flat) \r\nmodel = Model(inputs=input_layer, outputs=classifier)\r\nmodel.summary()\r\n```\r\n```\r\nModel: \"model_5\"\r\n_________________________________________________________________\r\nLayer (type) Output Shape Param # \r\n=================================================================\r\ninput_6 (InputLayer) [(None, 512)] 0 \r\n_________________________________________________________________\r\ntf_bert_model_5 (TFBertModel ((None, 512, 768), (None, 102267648 \r\n_________________________________________________________________\r\ndropout_225 (Dropout) (None, 768) 0 \r\n_________________________________________________________________\r\nflatten_3 (Flatten) (None, 768) 0 \r\n_________________________________________________________________\r\ndense_5 (Dense) (None, 5) 3845 \r\n=================================================================\r\nTotal params: 102,271,493\r\nTrainable params: 102,271,493\r\nNon-trainable params: 0\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"It is still a problem",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,586 | 1,586 | NONE | null | ```
model = TFBertModel.from_pretrained('bert-base-chinese')
model.summary()
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
predictions = model.predict(validation_input_ids)
print(type(predictions))
print(predictions.shape)
```
```
<class 'list'>
AttributeError: 'list' object has no attribute 'shape'
```
The type is weird.
It is a (N, 512, 768) shape numpy array inside a List.
I had to take it out from the List
```
predictions = predictions[0]
print(predictions.shape)
```
```
(8359, 512, 768)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1953/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1952 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1952/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1952/comments | https://api.github.com/repos/huggingface/transformers/issues/1952/events | https://github.com/huggingface/transformers/pull/1952 | 528,801,397 | MDExOlB1bGxSZXF1ZXN0MzQ1Nzk1ODIx | 1,952 | suggest to track repo w/ https rather than ssh | {
"login": "rlouf",
"id": 3885044,
"node_id": "MDQ6VXNlcjM4ODUwNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3885044?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rlouf",
"html_url": "https://github.com/rlouf",
"followers_url": "https://api.github.com/users/rlouf/followers",
"following_url": "https://api.github.com/users/rlouf/following{/other_user}",
"gists_url": "https://api.github.com/users/rlouf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rlouf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rlouf/subscriptions",
"organizations_url": "https://api.github.com/users/rlouf/orgs",
"repos_url": "https://api.github.com/users/rlouf/repos",
"events_url": "https://api.github.com/users/rlouf/events{/privacy}",
"received_events_url": "https://api.github.com/users/rlouf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1952?src=pr&el=h1) Report\n> Merging [#1952](https://codecov.io/gh/huggingface/transformers/pull/1952?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8e5d84fcc1a645d3c13b8a2f64fa995637440dad?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1952?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1952 +/- ##\n======================================\n Coverage 84% 84% \n======================================\n Files 97 97 \n Lines 14340 14340 \n======================================\n Hits 12047 12047 \n Misses 2293 2293\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1952?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1952?src=pr&el=footer). Last update [8e5d84f...c6edc47](https://codecov.io/gh/huggingface/transformers/pull/1952?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,574 | 1,574 | 1,574 | CONTRIBUTOR | null | cf #1943 we tell users to track the repository via https rather than ssh (as it requires us to enable ssh). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1952/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1952",
"html_url": "https://github.com/huggingface/transformers/pull/1952",
"diff_url": "https://github.com/huggingface/transformers/pull/1952.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1952.patch",
"merged_at": 1574870549000
} |
https://api.github.com/repos/huggingface/transformers/issues/1951 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1951/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1951/comments | https://api.github.com/repos/huggingface/transformers/issues/1951/events | https://github.com/huggingface/transformers/issues/1951 | 528,798,243 | MDU6SXNzdWU1Mjg3OTgyNDM= | 1,951 | Benchmark not replicable | {
"login": "Pointy-Hat",
"id": 20556449,
"node_id": "MDQ6VXNlcjIwNTU2NDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/20556449?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pointy-Hat",
"html_url": "https://github.com/Pointy-Hat",
"followers_url": "https://api.github.com/users/Pointy-Hat/followers",
"following_url": "https://api.github.com/users/Pointy-Hat/following{/other_user}",
"gists_url": "https://api.github.com/users/Pointy-Hat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pointy-Hat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pointy-Hat/subscriptions",
"organizations_url": "https://api.github.com/users/Pointy-Hat/orgs",
"repos_url": "https://api.github.com/users/Pointy-Hat/repos",
"events_url": "https://api.github.com/users/Pointy-Hat/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pointy-Hat/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello @Pointy-Hat \r\n\r\nPlease tell us more, as explained [here](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md).",
"Technical data of the cluster used for computation:\r\nSystem: CentOS Linux release 7.5.1804 (Core)\r\nPython: 3.7.4\r\nPytorch: 1.3.1\r\n\r\nCode run:\r\n```\r\npython3 -m torch.distributed.launch --nproc_per_node 4 ./examples/run_glue.py \\\r\n --model_type bert \\\r\n --model_name_or_path bert-large-uncased-whole-word-masking \\\r\n --task_name MRPC \\\r\n --do_train \\\r\n --do_eval \\\r\n --do_lower_case \\\r\n --data_dir $GLUE_DIR/MRPC/ \\\r\n --max_seq_length 128 \\\r\n --per_gpu_eval_batch_size=8 \\\r\n --per_gpu_train_batch_size=8 \\\r\n --learning_rate 2e-5 \\\r\n --num_train_epochs 3.0 \\\r\n --output_dir /tmp/mrpc_single/ \\\r\n --overwrite_output_dir \\\r\n --overwrite_cache \r\n\r\n```\r\n\r\nExpected results (as per README.txt):\r\n\r\n```\r\nacc = 0.8823529411764706\r\nacc_and_f1 = 0.901702786377709\r\nf1 = 0.9210526315789473\r\n```\r\n\r\nObtained results:\r\n```\r\nacc = 0.8725490196078431\r\nacc_and_f1 = 0.888829254329469\r\nf1 = 0.9051094890510949\r\n```\r\n\r\nGLUE data obtained via their `download_glue_data.py` script, as recommended in README.",
"did you try multiple random seeds?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,580 | 1,580 | NONE | null | ## ❓ Questions & Help
Hello. I wanted to test if everything is allright with my downloads and so I ran the code snippet you provided in the section **Fine-tuning Bert model on the MRPC classification task** in the main README file (the only difference being the number of gpus-I use 4). However, my evaluation results are well below the ones you mention. I get
```
acc = 0.8725490196078431
acc_and_f1 = 0.888829254329469
f1 = 0.9051094890510949
```
in my terminal.
There is also no output file in the specified folder.
Do you know what could cause this?
Thanks for answer
MS | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1951/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1950 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1950/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1950/comments | https://api.github.com/repos/huggingface/transformers/issues/1950/events | https://github.com/huggingface/transformers/issues/1950 | 528,740,572 | MDU6SXNzdWU1Mjg3NDA1NzI= | 1,950 | word or sentence embedding from BERT model | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You can use [`BertModel`](https://huggingface.co/transformers/model_doc/bert.html#transformers.BertModel), it'll return the hidden states for the input sentence.",
"Found it, thanks @bkkaggle . Just for others who are looking for the same information. \r\n\r\nUsing Pytorch:\r\n```\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\nmodel = BertModel.from_pretrained('bert-base-uncased')\r\ninput_ids = torch.tensor(tokenizer.encode(\"Hello, my dog is cute\")).unsqueeze(0) # Batch size 1\r\noutputs = model(input_ids)\r\nlast_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple\r\n```\r\n\r\n\r\nUsing Tensorflow:\r\n```\r\nimport tensorflow as tf\r\nfrom transformers import BertTokenizer, TFBertModel\r\n\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\nmodel = TFBertModel.from_pretrained('bert-base-uncased')\r\ninput_ids = tf.constant(tokenizer.encode(\"Hello, my dog is cute\"))[None, :] # Batch size 1\r\noutputs = model(input_ids)\r\nlast_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple\r\n```",
"This is a bit different for `...ForSequenceClassification` models. I've found that the item at `outputs[0]` are the logits and the only way to get the `hidden_states` is to set `config.output_hidden_states=True` when initializing the model. Only then was I able to get the `hidden_states` which are located at `outputs[1]`. \r\n\r\nExample:\r\n\r\n```python3\r\ninputs = {\r\n \"input_ids\": batch[0],\r\n \"attention_mask\": batch[1]\r\n}\r\n\r\noutput = bertmodel(**inputs)\r\nlogits = output[0]\r\nhidden_states = output[1]\r\n```\r\n\r\n",
"By using this code, you can obtain a PyTorch tensor of (1, N, 768) shape, where _N_ is the number of different tokens extracted from `BertTokenizer`. If you want to build the sentence vector by exploiting these N tensors, how do you do that? @engrsfi\r\n\r\n> Found it, thanks @bkkaggle . Just for others who are looking for the same information.\r\n> \r\n> Using Pytorch:\r\n> \r\n> ```\r\n> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n> model = BertModel.from_pretrained('bert-base-uncased')\r\n> input_ids = torch.tensor(tokenizer.encode(\"Hello, my dog is cute\")).unsqueeze(0) # Batch size 1\r\n> outputs = model(input_ids)\r\n> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple\r\n> ```\r\n> \r\n> Using Tensorflow:\r\n> \r\n> ```\r\n> import tensorflow as tf\r\n> from transformers import BertTokenizer, TFBertModel\r\n> \r\n> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n> model = TFBertModel.from_pretrained('bert-base-uncased')\r\n> input_ids = tf.constant(tokenizer.encode(\"Hello, my dog is cute\"))[None, :] # Batch size 1\r\n> outputs = model(input_ids)\r\n> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple\r\n> ```",
"> This is a bit different for `...ForSequenceClassification` models. I've found that the item at `outputs[0]` are the logits and the only way to get the `hidden_states` is to set `config.output_hidden_states=True` when initializing the model. Only then was I able to get the `hidden_states` which are located at `outputs[1]`.\r\n> \r\n> Example:\r\n> \r\n> ```python\r\n> inputs = {\r\n> \"input_ids\": batch[0],\r\n> \"attention_mask\": batch[1]\r\n> }\r\n> \r\n> output = bertmodel(**inputs)\r\n> logits = output[0]\r\n> hidden_states = output[1]\r\n> ```\r\n\r\nI am interested in the last hidden states which are seen as kind of embeddings. I think you are referring to all hidden states including the output of the embedding layer.\r\n\r\n```\r\n\"**hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``)\r\n list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings)\r\n of shape ``(batch_size, sequence_length, hidden_size)``:\r\n Hidden-states of the model at the output of each layer plus the initial embedding outputs\r\n```.",
"> By using this code, you can obtain a PyTorch tensor of (1, N, 768) shape, where _N_ is the number of different tokens extracted from `BertTokenizer`. If you want to build the sentence vector by exploiting these N tensors, how do you do that? @engrsfi\r\n> \r\n> > Found it, thanks @bkkaggle . Just for others who are looking for the same information.\r\n> > Using Pytorch:\r\n> > ```\r\n> > tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n> > model = BertModel.from_pretrained('bert-base-uncased')\r\n> > input_ids = torch.tensor(tokenizer.encode(\"Hello, my dog is cute\")).unsqueeze(0) # Batch size 1\r\n> > outputs = model(input_ids)\r\n> > last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple\r\n> > ```\r\n> > \r\n> > \r\n> > Using Tensorflow:\r\n> > ```\r\n> > import tensorflow as tf\r\n> > from transformers import BertTokenizer, TFBertModel\r\n> > \r\n> > tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n> > model = TFBertModel.from_pretrained('bert-base-uncased')\r\n> > input_ids = tf.constant(tokenizer.encode(\"Hello, my dog is cute\"))[None, :] # Batch size 1\r\n> > outputs = model(input_ids)\r\n> > last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple\r\n> > ```\r\n\r\nYou can take an average of them. However, I think the embeddings at first position [CLS] are considered a kind of sentence vector because only those are fed to a further classifier if any for downstream tasks. Disclaimer: I am not sure about it.",
"> > This is a bit different for `...ForSequenceClassification` models. I've found that the item at `outputs[0]` are the logits and the only way to get the `hidden_states` is to set `config.output_hidden_states=True` when initializing the model. Only then was I able to get the `hidden_states` which are located at `outputs[1]`.\r\n> > Example:\r\n> > ```python\r\n> > inputs = {\r\n> > \"input_ids\": batch[0],\r\n> > \"attention_mask\": batch[1]\r\n> > }\r\n> > \r\n> > output = bertmodel(**inputs)\r\n> > logits = output[0]\r\n> > hidden_states = output[1]\r\n> > ```\r\n> \r\n> I am interested in the last hidden states which are seen as kind of embeddings. I think you are referring to all hidden states including the output of the embedding layer.\r\n> \r\n> ```\r\n> \"**hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``)\r\n> list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings)\r\n> of shape ``(batch_size, sequence_length, hidden_size)``:\r\n> Hidden-states of the model at the output of each layer plus the initial embedding outputs\r\n> ```.\r\n> ```\r\n\r\nShould be as simple as grabbing the last element in the list:\r\n\r\n```python3\r\nlast_layer = hidden_states[-1]\r\n```\r\n",
"@maxzzze According to the documentation, one can get the last hidden states directly without setting this flag to True. See below.\r\nhttps://huggingface.co/transformers/_modules/transformers/modeling_bert.html#BertModel\r\n\r\n```\r\nOutputs: `Tuple` comprising various elements depending on the configuration (config) and inputs:\r\n **last_hidden_state**: ``torch.FloatTensor`` of shape ``(batch_size, sequence_length, hidden_size)``\r\n Sequence of hidden-states at the output of the last layer of the model.\r\n **pooler_output**: ``torch.FloatTensor`` of shape ``(batch_size, hidden_size)``\r\n Last layer hidden-state of the first token of the sequence (classification token)\r\n further processed by a Linear layer and a Tanh activation function. The Linear\r\n layer weights are trained from the next sentence prediction (classification)\r\n objective during Bert pretraining. This output is usually *not* a good summary\r\n of the semantic content of the input, you're often better with averaging or pooling\r\n the sequence of hidden-states for the whole input sequence.\r\n **hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``)\r\n list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings)\r\n of shape ``(batch_size, sequence_length, hidden_size)``:\r\n Hidden-states of the model at the output of each layer plus the initial embedding outputs.\r\n **attentions**: (`optional`, returned when ``config.output_attentions=True``)\r\n list of ``torch.FloatTensor`` (one for each layer) of shape ``(batch_size, num_heads, sequence_length, sequence_length)``:\r\n Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.\r\n```\r\n\r\nBTW, for me, the shape of hidden_states in the below code is `(batch_size, 768)` when I set this Flag to True, not sure if I can extract last hidden states from that.\r\n\r\n```\r\noutput = bertmodel(**inputs)\r\nlogits = output[0]\r\nhidden_states = output[1]\r\n```",
"> @maxzzze According to the documentation, one can get the last hidden states directly without setting this flag to True. See below.\r\n> \r\n> ```\r\n> Outputs: `Tuple` comprising various elements depending on the configuration (config) and inputs:\r\n> **last_hidden_state**: ``torch.FloatTensor`` of shape ``(batch_size, sequence_length, hidden_size)``\r\n> Sequence of hidden-states at the output of the last layer of the model.\r\n> **pooler_output**: ``torch.FloatTensor`` of shape ``(batch_size, hidden_size)``\r\n> Last layer hidden-state of the first token of the sequence (classification token)\r\n> further processed by a Linear layer and a Tanh activation function. The Linear\r\n> layer weights are trained from the next sentence prediction (classification)\r\n> objective during Bert pretraining. This output is usually *not* a good summary\r\n> of the semantic content of the input, you're often better with averaging or pooling\r\n> the sequence of hidden-states for the whole input sequence.\r\n> **hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``)\r\n> list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings)\r\n> of shape ``(batch_size, sequence_length, hidden_size)``:\r\n> Hidden-states of the model at the output of each layer plus the initial embedding outputs.\r\n> **attentions**: (`optional`, returned when ``config.output_attentions=True``)\r\n> list of ``torch.FloatTensor`` (one for each layer) of shape ``(batch_size, num_heads, sequence_length, sequence_length)``:\r\n> Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.\r\n> ```\r\n> \r\n> BTW, for me, the shape of hidden_states in the below code is `(batch_size, 768)` whereas it should be `(batch_size, num_heads, sequence_length, sequence_length)`.\r\n> \r\n> ```\r\n> output = bertmodel(**inputs)\r\n> logits = output[0]\r\n> hidden_states = output[1]\r\n> ```\r\n\r\nI believe your comment is in reference to the standard models, but its hard to tell without a link. Can you link where to where in the documentation the pasted doc string is from? \r\n\r\nI dont know if you saw my original comment but I was providing an example for how to get `hidden_states` from the `..ForSequenceClassification` models, not the standard ones. The `..ForSequenceClassification` models do not output `hidden_states` by default: https://huggingface.co/transformers/model_doc/bert.html#bertforsequenceclassification",
"Sorry, I missed that part :) I am referring to the standard BERTMODEL. Doc link:\r\nhttps://huggingface.co/transformers/model_doc/bert.html#bertmodel\r\n\r\n> > @maxzzze According to the documentation, one can get the last hidden states directly without setting this flag to True. See below.\r\n> > ```\r\n> > Outputs: `Tuple` comprising various elements depending on the configuration (config) and inputs:\r\n> > **last_hidden_state**: ``torch.FloatTensor`` of shape ``(batch_size, sequence_length, hidden_size)``\r\n> > Sequence of hidden-states at the output of the last layer of the model.\r\n> > **pooler_output**: ``torch.FloatTensor`` of shape ``(batch_size, hidden_size)``\r\n> > Last layer hidden-state of the first token of the sequence (classification token)\r\n> > further processed by a Linear layer and a Tanh activation function. The Linear\r\n> > layer weights are trained from the next sentence prediction (classification)\r\n> > objective during Bert pretraining. This output is usually *not* a good summary\r\n> > of the semantic content of the input, you're often better with averaging or pooling\r\n> > the sequence of hidden-states for the whole input sequence.\r\n> > **hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``)\r\n> > list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings)\r\n> > of shape ``(batch_size, sequence_length, hidden_size)``:\r\n> > Hidden-states of the model at the output of each layer plus the initial embedding outputs.\r\n> > **attentions**: (`optional`, returned when ``config.output_attentions=True``)\r\n> > list of ``torch.FloatTensor`` (one for each layer) of shape ``(batch_size, num_heads, sequence_length, sequence_length)``:\r\n> > Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.\r\n> > ```\r\n> > \r\n> > \r\n> > BTW, for me, the shape of hidden_states in the below code is `(batch_size, 768)` whereas it should be `(batch_size, num_heads, sequence_length, sequence_length)`.\r\n> > ```\r\n> > output = bertmodel(**inputs)\r\n> > logits = output[0]\r\n> > hidden_states = output[1]\r\n> > ```\r\n> \r\n> I believe your comment is in reference to the standard models, but its hard to tell without a link. Can you link where to where in the documentation the pasted doc string is from?\r\n> \r\n> I dont know if you saw my original comment but I was providing an example for how to get `hidden_states` from the `..ForSequenceClassification` models, not the standard ones. The `..ForSequenceClassification` models do not output `hidden_states` by default: https://huggingface.co/transformers/model_doc/bert.html#bertforsequenceclassification\r\n\r\n",
"@engrsfi @maxzzze @bkkaggle \r\nPlease, look [here](https://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/). I hope it can help :)",
"@TheEdoardo93 is this example taking the first element in each of the `hidden_states`?",
"@engrsfi You can process the hidden states of BERT (all layers or only the last layer) in whatever way you want.\r\n\r\nMost people usually only take the hidden states of the [CLS] token of the last layer - using the hidden states for all tokens or from multiple layers doesn't usually help you that much.\r\n\r\nIf you want to get the embeddings for classification, just do something like:\r\n\r\n```\r\ninput_sentence = torch.tensor(tokenizer.encode(\"[CLS] My sentence\")).unsqueeze(0)\r\nout = model(input_sentence)\r\nembeddings_of_last_layer = out[0]\r\ncls_embeddings = embeddings_of_last_layer[0]\r\n```",
"> @engrsfi You can process the hidden states of BERT (all layers or only the last layer) in whatever way you want.\r\n> \r\n> Most people usually only take the hidden states of the [CLS] token of the last layer - using the hidden states for all tokens or from multiple layers doesn't usually help you that much.\r\n> \r\n> If you want to get the embeddings for classification, just do something like:\r\n> \r\n> ```\r\n> input_sentence = torch.tensor(tokenizer.encode(\"[CLS] My sentence\")).unsqueeze(0)\r\n> out = model(input_sentence)\r\n> embeddings_of_last_layer = out[0]\r\n> cls_embeddings = embeddings_of_last_layer[0]\r\n> ```\r\n\r\nDo you have any reference as to \"people usually only take the hidden states of the [CLS] token of the last layer\"?",
"Here are a few related links: [1](https://github.com/google-research/bert/issues/196), [2](https://github.com/hanxiao/bert-as-service#q-what-are-the-available-pooling-strategies), [3](https://yashuseth.blog/2019/06/12/bert-explained-faqs-understand-bert-working/)\r\n\r\nThe [CLS] token isn't the only (or necessarily the best) way to finetune, but it is the easiest and is Bert's default",
"There is some clarification about the use of the last hidden states in the BERT Paper.\r\n According to the paper, the last hidden state for [CLS] is mainly used for classification tasks and the last hidden states for all tokens are used for token level tasks such as sequence tagging or question answering. \r\n\r\nFrom the paper:\r\n\r\n> At the output, the token representations are fed into an output layer for token level tasks, such as sequence tagging or question answering, and the [CLS] representation is fed into an output layer for classification, such as entailment or sentiment analysis.\r\n\r\nReference: \r\nBERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (https://arxiv.org/pdf/1810.04805.pdf)",
"What about ALBERT? The output of the last hidden state isn't the same of the embedding because in the doc they say that the embedding have a size of 128 for every model (https://arxiv.org/pdf/1909.11942.pdf page 6).\r\nBut I'm not sure if the 128-embedding referenced in the table is something internally used to represent words or the final word embedding.",
"> Found it, thanks @bkkaggle . Just for others who are looking for the same information.\r\n> \r\n> Using Pytorch:\r\n> \r\n> ```\r\n> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n> model = BertModel.from_pretrained('bert-base-uncased')\r\n> input_ids = torch.tensor(tokenizer.encode(\"Hello, my dog is cute\")).unsqueeze(0) # Batch size 1\r\n> outputs = model(input_ids)\r\n> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple\r\n> ```\r\n> \r\n> Using Tensorflow:\r\n> \r\n> ```\r\n> import tensorflow as tf\r\n> from transformers import BertTokenizer, TFBertModel\r\n> \r\n> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n> model = TFBertModel.from_pretrained('bert-base-uncased')\r\n> input_ids = tf.constant(tokenizer.encode(\"Hello, my dog is cute\"))[None, :] # Batch size 1\r\n> outputs = model(input_ids)\r\n> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple\r\n> ```\r\n\r\nif batch size is N, how to convert?",
"> What about ALBERT? The output of the last hidden state isn't the same of the embedding because in the doc they say that the embedding have a size of 128 for every model (https://arxiv.org/pdf/1909.11942.pdf page 6).\r\n> But I'm not sure if the 128-embedding referenced in the table is something internally used to represent words or the final word embedding.\r\n\r\n128 is used internally by Albert. The output of the model (last hidden state) is your actual word embeddings. In order to understand this better, you should read the following blog from Google.\r\nhttps://ai.googleblog.com/2019/12/albert-lite-bert-for-self-supervised.html\r\n\r\nQuote: \r\n\"The key to optimizing performance, captured in the design of ALBERT, is to allocate the model’s capacity more efficiently. Input-level embeddings (words, sub-tokens, etc.) need to learn context-independent representations, a representation for the word “bank”, for example. In contrast, hidden-layer embeddings need to refine that into context-dependent representations, e.g., a representation for “bank” in the context of financial transactions, and a different representation for “bank” in the context of river-flow management.\r\n\r\n**This is achieved by factorization of the embedding parametrization — the embedding matrix is split between input-level embeddings with a relatively-low dimension (e.g., 128), while the hidden-layer embeddings use higher dimensionalities (768 as in the BERT case, or more).** With this step alone, ALBERT achieves an 80% reduction in the parameters of the projection block, at the expense of only a minor drop in performance — 80.3 SQuAD2.0 score, down from 80.4; or 67.9 on RACE, down from 68.2 — with all other conditions the same as for BERT.\"\r\n",
"> > Found it, thanks @bkkaggle . Just for others who are looking for the same information.\r\n> > Using Pytorch:\r\n> > ```\r\n> > tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n> > model = BertModel.from_pretrained('bert-base-uncased')\r\n> > input_ids = torch.tensor(tokenizer.encode(\"Hello, my dog is cute\")).unsqueeze(0) # Batch size 1\r\n> > outputs = model(input_ids)\r\n> > last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple\r\n> > ```\r\n> > \r\n> > \r\n> > Using Tensorflow:\r\n> > ```\r\n> > import tensorflow as tf\r\n> > from transformers import BertTokenizer, TFBertModel\r\n> > \r\n> > tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n> > model = TFBertModel.from_pretrained('bert-base-uncased')\r\n> > input_ids = tf.constant(tokenizer.encode(\"Hello, my dog is cute\"))[None, :] # Batch size 1\r\n> > outputs = model(input_ids)\r\n> > last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple\r\n> > ```\r\n> \r\n> if batch size is N, how to convert?\r\n\r\nIf I understand you correctly, you are asking for how to get the last hidden states for all entries in a batch of size N. If that's the case, then here is the explanation. \r\n\r\nYour model expect input of the following shape:\r\n\r\n`(batch_size, sequence_length)`\r\n\r\nand returns last hidden states of the following shape:\r\n\r\n`(batch_size, sequence_length, hidden_size)`\r\n\r\nYou can just go through the last hidden states to get the individual last hidden state for each input in the batch size of N.\r\n\r\nReference:\r\nhttps://huggingface.co/transformers/model_doc/bert.html",
"@engrsfi : What if I want to use bert embedding vector of each token as an input to an LSTM network? Can I get the embedding of each token of the sentence from the last hidden layer of the bert model? In this case I think I can't just use the embedding for [CLS] token as I need word embedding of each token?\r\n I used the code below to get bert's word embedding for all tokens of my sentences. I padded all my sentences to have maximum length of 80 and also used attention mask to ignore padded elements. in this case the shape of last_hidden_states element is of size (batch_size ,80 ,768). However, when I see my embeddings, I can see that embedding vectors for padded elements are not the same? like I have a vector of size 768 for each token of the sentence(most of them are padded tokens). but vectors for padded element are not equal. is it natural?\r\n\r\nimport tensorflow as tf\r\nimport numpy as np \r\nfrom transformers import BertTokenizer, TFBertModel\r\n\r\nbert_model = TFBertModel.from_pretrained(\"bert-base-uncased\")\r\nbert_tokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\ntokenized = x_train['token'].apply((lambda x: bert_tokenizer.encode(x, add_special_tokens=True, max_length=80)))\r\npadded = np.array([i + [0]*(80-len(i)) for i in tokenized.values])\r\nattention_mask = np.where(padded != 0, 1, 0)\r\ninput_ids = tf.constant(padded) \r\nattention_mask = tf.constant(attention_mask)\r\noutput= bert_model(input_ids, attention_mask=attention_mask)\r\nlast_hidden_states=output[0]",
"> How can I extract embeddings for a sentence or a set of words directly from pre-trained models (Standard BERT)? For example, I am using Spacy for this purpose at the moment where I can do it as follows:\r\n> \r\n> sentence vector:\r\n> `sentence_vector = bert_model(\"This is an apple\").vector`\r\n> \r\n> word_vectors:\r\n> \r\n> ```\r\n> words = bert_model(\"This is an apple\")\r\n> word_vectors = [w.vector for w in words]\r\n> ```\r\n> \r\n> I am wondering if this is possible directly with huggingface pre-trained models (especially BERT).\r\n\r\nHi, could I ask how you would use Spacy to do this? Is there a link? Thanks a lot. ",
"> > How can I extract embeddings for a sentence or a set of words directly from pre-trained models (Standard BERT)? For example, I am using Spacy for this purpose at the moment where I can do it as follows:\r\n> > sentence vector:\r\n> > `sentence_vector = bert_model(\"This is an apple\").vector`\r\n> > word_vectors:\r\n> > ```\r\n> > words = bert_model(\"This is an apple\")\r\n> > word_vectors = [w.vector for w in words]\r\n> > ```\r\n> > \r\n> > \r\n> > I am wondering if this is possible directly with huggingface pre-trained models (especially BERT).\r\n> \r\n> Hi, could I ask how you would use Spacy to do this? Is there a link? Thanks a lot.\r\n\r\nHere is the link:\r\nhttps://spacy.io/usage/vectors-similarity",
"> @engrsfi You can process the hidden states of BERT (all layers or only the last layer) in whatever way you want.\r\n> \r\n> Most people usually only take the hidden states of the [CLS] token of the last layer - using the hidden states for all tokens or from multiple layers doesn't usually help you that much.\r\n> \r\n> If you want to get the embeddings for classification, just do something like:\r\n> \r\n> ```\r\n> input_sentence = torch.tensor(tokenizer.encode(\"[CLS] My sentence\")).unsqueeze(0)\r\n> out = model(input_sentence)\r\n> embeddings_of_last_layer = out[0]\r\n> cls_embeddings = embeddings_of_last_layer[0]\r\n> ```\r\n\r\nThank you for sharing the code. It really helped in understanding tokenization in BERT. I ran this and had a minor problem. Shouldn't it be: \r\n\r\n```cls_embeddings = embeddings_of_last_layer[0][0]```? This is because embeddings_of_last_layer is of the dimension: 1*#tokens*#hidden-units. Then, since [CLS] is the first token (and usually have 101 as id), we want embedding corresponding to just [CLS]. ```embeddings_of_last_layer[0]``` is of shape #tokens*#hidden-units and contains embeddings of all the tokens.",
"@sahand91 \r\npooled_output, sequence_output = bert_model(input_)\r\npooled_output.shape = (1, 768), one vector on 768 entries (represent the whole sentence)\r\nsequence_output.shape = (batch_size, max_len, dim), (1, 256, 768) bs = 1, n_tokens = 256\r\nsequence output gives the vector for each token of the sentence. \r\n\r\nI have used the sequence output for classification task like sentiment analysis. As the paper mentions that the pooled output is not a good representation of the whole sentence so we use the sequence output and feed it further in a CNN or LSTM. \r\n\r\nSo I don't see any problem in using the sequence output for classification tasks as we get to see the actual vector representation of the word say \"bank\" in both contexts \"commercial\" and \"location\" (bank of a river) ",
"> > @engrsfi You can process the hidden states of BERT (all layers or only the last layer) in whatever way you want.\r\n> > Most people usually only take the hidden states of the [CLS] token of the last layer - using the hidden states for all tokens or from multiple layers doesn't usually help you that much.\r\n> > If you want to get the embeddings for classification, just do something like:\r\n> > ```\r\n> > input_sentence = torch.tensor(tokenizer.encode(\"[CLS] My sentence\")).unsqueeze(0)\r\n> > out = model(input_sentence)\r\n> > embeddings_of_last_layer = out[0]\r\n> > cls_embeddings = embeddings_of_last_layer[0]\r\n> > ```\r\n> \r\n> Thank you for sharing the code. It really helped in understanding tokenization in BERT. I ran this and had a minor problem. Shouldn't it be:\r\n> \r\n> `cls_embeddings = embeddings_of_last_layer[0][0]`? This is because embeddings_of_last_layer is of the dimension: 1*#tokens*#hidden-units. Then, since [CLS] is the first token (and usually have 101 as id), we want embedding corresponding to just [CLS]. `embeddings_of_last_layer[0]` is of shape #tokens*#hidden-units and contains embeddings of all the tokens.\r\n\r\nYes i think the same. @sumitsidana \r\nembeddings_of_last_layer[0][0].shape\r\nOut[179]: torch.Size([144]) # where 144 in my case is the hidden_size\r\n\r\nAnyone confirming that embeddings_of_last_layer[0][0] is the embedding related to CLS token for each sequence?",
"> > > @engrsfi You can process the hidden states of BERT (all layers or only the last layer) in whatever way you want.\r\n> > > Most people usually only take the hidden states of the [CLS] token of the last layer - using the hidden states for all tokens or from multiple layers doesn't usually help you that much.\r\n> > > If you want to get the embeddings for classification, just do something like:\r\n> > > ```\r\n> > > input_sentence = torch.tensor(tokenizer.encode(\"[CLS] My sentence\")).unsqueeze(0)\r\n> > > out = model(input_sentence)\r\n> > > embeddings_of_last_layer = out[0]\r\n> > > cls_embeddings = embeddings_of_last_layer[0]\r\n> > > ```\r\n> > \r\n> > \r\n> > Thank you for sharing the code. It really helped in understanding tokenization in BERT. I ran this and had a minor problem. Shouldn't it be:\r\n> > `cls_embeddings = embeddings_of_last_layer[0][0]`? This is because embeddings_of_last_layer is of the dimension: 1*#tokens*#hidden-units. Then, since [CLS] is the first token (and usually have 101 as id), we want embedding corresponding to just [CLS]. `embeddings_of_last_layer[0]` is of shape #tokens*#hidden-units and contains embeddings of all the tokens.\r\n> \r\n> Yes i think the same. @sumitsidana\r\n> embeddings_of_last_layer[0][0].shape\r\n> Out[179]: torch.Size([144]) # where 144 in my case is the hidden_size\r\n> \r\n> Anyone confirming that embeddings_of_last_layer[0][0] is the embedding related to CLS token for each sequence?\r\n\r\nYes it is. but it is only for first batch. you will have to loop through all the batches and get the first element (CLS) for each sentence.",
"Yes gotcha. Thanks",
"> This is a bit different for `...ForSequenceClassification` models. I've found that the item at `outputs[0]` are the logits and the only way to get the `hidden_states` is to set `config.output_hidden_states=True` when initializing the model. Only then was I able to get the `hidden_states` which are located at `outputs[1]`.\r\n> \r\n> Example:\r\n> \r\n> ```python\r\n> inputs = {\r\n> \"input_ids\": batch[0],\r\n> \"attention_mask\": batch[1]\r\n> }\r\n> \r\n> output = bertmodel(**inputs)\r\n> logits = output[0]\r\n> hidden_states = output[1]\r\n> ```\r\nlogtis = output[0] means the word embedding. So, does hidden_states = output[1] means the sentence level embedding ?",
"> Found it, thanks @bkkaggle . Just for others who are looking for the same information.\r\n> \r\n> Using Pytorch:\r\n> \r\n> ```\r\n> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n> model = BertModel.from_pretrained('bert-base-uncased')\r\n> input_ids = torch.tensor(tokenizer.encode(\"Hello, my dog is cute\")).unsqueeze(0) # Batch size 1\r\n> outputs = model(input_ids)\r\n> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple\r\n> ```\r\n> \r\n> Using Tensorflow:\r\n> \r\n> ```\r\n> import tensorflow as tf\r\n> from transformers import BertTokenizer, TFBertModel\r\n> \r\n> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n> model = TFBertModel.from_pretrained('bert-base-uncased')\r\n> input_ids = tf.constant(tokenizer.encode(\"Hello, my dog is cute\"))[None, :] # Batch size 1\r\n> outputs = model(input_ids)\r\n> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple\r\n> ```\r\n\r\noutputs[0] is sentence embedding for \"Hello, my dog is cute\" right?\r\nthen what is outputs[1]?"
] | 1,574 | 1,613 | 1,596 | NONE | null | How can I extract embeddings for a sentence or a set of words directly from pre-trained models (Standard BERT)? For example, I am using Spacy for this purpose at the moment where I can do it as follows:
sentence vector:
`sentence_vector = bert_model("This is an apple").vector`
word_vectors:
```
words = bert_model("This is an apple")
word_vectors = [w.vector for w in words]
```
I am wondering if this is possible directly with huggingface pre-trained models (especially BERT).
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1950/reactions",
"total_count": 24,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 17,
"rocket": 0,
"eyes": 7
} | https://api.github.com/repos/huggingface/transformers/issues/1950/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1949 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1949/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1949/comments | https://api.github.com/repos/huggingface/transformers/issues/1949/events | https://github.com/huggingface/transformers/issues/1949 | 528,651,027 | MDU6SXNzdWU1Mjg2NTEwMjc= | 1,949 | Can i train my own text corpus | {
"login": "shashankMadan-designEsthetics",
"id": 45225143,
"node_id": "MDQ6VXNlcjQ1MjI1MTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/45225143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shashankMadan-designEsthetics",
"html_url": "https://github.com/shashankMadan-designEsthetics",
"followers_url": "https://api.github.com/users/shashankMadan-designEsthetics/followers",
"following_url": "https://api.github.com/users/shashankMadan-designEsthetics/following{/other_user}",
"gists_url": "https://api.github.com/users/shashankMadan-designEsthetics/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shashankMadan-designEsthetics/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shashankMadan-designEsthetics/subscriptions",
"organizations_url": "https://api.github.com/users/shashankMadan-designEsthetics/orgs",
"repos_url": "https://api.github.com/users/shashankMadan-designEsthetics/repos",
"events_url": "https://api.github.com/users/shashankMadan-designEsthetics/events{/privacy}",
"received_events_url": "https://api.github.com/users/shashankMadan-designEsthetics/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can initialize the weights of your model as the ones of e.g. BERT, and after that you can fine-tune your model with _your own data_ (**transfer learning**). Please see [here](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py) for fine-tuning for a particular task.\r\n\r\nHave i answered to you?\r\n\r\n> ## Questions & Help\r\n> I get the idea of using `from_pretrained` but Can i train my own text corpus and then get the weights? If So how?",
"> You can initialize the weights of your model as the ones of e.g. BERT, and after that you can fine-tune your model with _your own data_ (**transfer learning**). Please see [here](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py) for fine-tuning for a particular task.\r\n> \r\n> Have i answered to you?\r\n> \r\n> > ## Questions & Help\r\n> > I get the idea of using `from_pretrained` but Can i train my own text corpus and then get the weights? If So how?\r\n\r\nHey Thanks! I'll go through the source code you referred, also just wanted to confirm the same goes with gpt-2 model right?",
"Yeah, this particular script works with OpenAI GPT-2 too.\r\nIn general, the most part of the code is the same even changing the model chosen.\r\n\r\n> > You can initialize the weights of your model as the ones of e.g. BERT, and after that you can fine-tune your model with _your own data_ (**transfer learning**). Please see [here](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py) for fine-tuning for a particular task.\r\n> > Have i answered to you?\r\n> > > ## Questions & Help\r\n> > > I get the idea of using `from_pretrained` but Can i train my own text corpus and then get the weights? If So how?\r\n> \r\n> Hey Thanks! I'll go through the source code you referred, also just wanted to confirm the same goes with gpt-2 model right?",
"Thanks for the quick response, i'll close this one then."
] | 1,574 | 1,574 | 1,574 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I get the idea of using `from_pretrained` but Can i train my own text corpus and then get the weights? If So how? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1949/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1948 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1948/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1948/comments | https://api.github.com/repos/huggingface/transformers/issues/1948/events | https://github.com/huggingface/transformers/issues/1948 | 528,633,147 | MDU6SXNzdWU1Mjg2MzMxNDc= | 1,948 | Should I use `attention_mask`? | {
"login": "ShaneTian",
"id": 42370681,
"node_id": "MDQ6VXNlcjQyMzcwNjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/42370681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShaneTian",
"html_url": "https://github.com/ShaneTian",
"followers_url": "https://api.github.com/users/ShaneTian/followers",
"following_url": "https://api.github.com/users/ShaneTian/following{/other_user}",
"gists_url": "https://api.github.com/users/ShaneTian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShaneTian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShaneTian/subscriptions",
"organizations_url": "https://api.github.com/users/ShaneTian/orgs",
"repos_url": "https://api.github.com/users/ShaneTian/repos",
"events_url": "https://api.github.com/users/ShaneTian/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShaneTian/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nThe attention mask is useful when there are padding indices on which you do not want to perform attention. You should use it when you're using padding, which should only happen when having a batch size superior to one with different sized sequences in those batches.",
"> Hi,\r\n> \r\n> The attention mask is useful when there are padding indices on which you do not want to perform attention. You should use it when you're using padding, which should only happen when having a batch size superior to one with different sized sequences in those batches.\r\n\r\nThanks for your reply! @LysandreJik \r\nSo, you mean I should use it when I use batch data and there are different sized sequences in batch, otherwise I shouldn’t. \r\nBut why the official demo is ok? I guess it will implement PAD again no matter what I have already implemented PAD manually or not? Right?",
"> So, you mean I should use it when I use batch data and there are different sized sequences in batch, otherwise I shouldn’t.\r\n\r\nExactly. You can use them, but you don't need to. You probably shouldn't because of the performance cost.\r\n\r\n> But why the official demo is ok? I guess it will implement PAD again no matter what I have already implemented PAD manually or not? Right?\r\n\r\nIt is ok because there is only one sentence.",
"> > So, you mean I should use it when I use batch data and there are different sized sequences in batch, otherwise I shouldn’t.\r\n> \r\n> Exactly. You can use them, but you don't need to. You probably shouldn't because of the performance cost.\r\n> \r\n> > But why the official demo is ok? I guess it will implement PAD again no matter what I have already implemented PAD manually or not? Right?\r\n> \r\n> It is ok because there is only one sentence.\r\n\r\nThank you! @rlouf \r\n1. I think those tokens that have been padded should not be paid attention. So I don't know why I don't need to.\r\n2. The most models have a fixed length(max_length), the sentences should pad to it before feed. I mean why the official demo doesn't pad.",
"> 1. I think those tokens that have been padded should not be paid attention. So I don't know why I don't need to.\r\n\r\nSorry, now I re-read my response I realize it was not very clear. I meant that if they are the same size you can still pad, but you'd hit the performance. If they are not, then you should absolutely pad and pass the appropriate attention mask.\r\n\r\n> 2. The most models have a fixed length(max_length), the sentences should pad to it before feed. I mean why the official demo doesn't pad.\r\n\r\nYes but sentences don't all have the same length:\r\n\r\n```\r\n// No need to pad this\r\n[[1, 2, 3],\r\n[5, 6, 7]]\r\n```\r\n\r\nBUT\r\n\r\n```\r\n// Here you should pad\r\n[[1, 2, 3, pad_token_id],\r\n[5, 6, 7, 8]]\r\n```",
"Thank you very much! I got it. @rlouf "
] | 1,574 | 1,575 | 1,575 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
#### My Target
1. Transfer sentences to `ids`
2. Padding `ids` when
3. Encode `ids` to vector(`last_hidden_states`)
4. Put this vector to my own downstream model.
#### My code
```py
original_text = "Hello world!"
ids = model.encode(original_text)
padding_mask = [1] * len(ids) # Padding mask
while ids < max_length:
ids.append(0) # Padding
padding_mask.append(0) # Mask using 0
# === Use `attention_mask` ===
outputs = model(ids, attention_mask=padding_mask)
last_hidden_states = outputs[0] # Get vector such as '<CLS>'
```
#### But...
In official demo code, **it does not use padding mask**:
```py
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1
# === Not use `attention_mask` ===
outputs = model(input_ids)
last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
```
So, why? Should I use `attention_mask`?
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1948/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/1948/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1947 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1947/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1947/comments | https://api.github.com/repos/huggingface/transformers/issues/1947/events | https://github.com/huggingface/transformers/issues/1947 | 528,597,115 | MDU6SXNzdWU1Mjg1OTcxMTU= | 1,947 | Expected object of scalar type Byte but got scalar type Bool for argument #2 'mask' | {
"login": "bigzhouj",
"id": 29719942,
"node_id": "MDQ6VXNlcjI5NzE5OTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/29719942?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bigzhouj",
"html_url": "https://github.com/bigzhouj",
"followers_url": "https://api.github.com/users/bigzhouj/followers",
"following_url": "https://api.github.com/users/bigzhouj/following{/other_user}",
"gists_url": "https://api.github.com/users/bigzhouj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bigzhouj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bigzhouj/subscriptions",
"organizations_url": "https://api.github.com/users/bigzhouj/orgs",
"repos_url": "https://api.github.com/users/bigzhouj/repos",
"events_url": "https://api.github.com/users/bigzhouj/events{/privacy}",
"received_events_url": "https://api.github.com/users/bigzhouj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"in probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0)",
"Please, describe your environment, post the source code for reproducibility and the error.\r\n\r\n> in probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0)",
"Please upgrade your Pytorch version to 1.2.0+.",
"You're probably passing in a boolean tensor (true or false) instead of a byte tensor (0 or 1) for your attention mask.\r\n\r\nTry changing\r\n```\r\nprobability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0)\r\n```\r\n\r\nto\r\n```\r\nprobability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.uint8), value=0.0)\r\n```"
] | 1,574 | 1,574 | 1,574 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
fine tuning bert use wikitext2 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1947/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1946 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1946/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1946/comments | https://api.github.com/repos/huggingface/transformers/issues/1946/events | https://github.com/huggingface/transformers/pull/1946 | 528,473,972 | MDExOlB1bGxSZXF1ZXN0MzQ1NTI4NjYz | 1,946 | Fixed typo | {
"login": "AveryLiu",
"id": 5574795,
"node_id": "MDQ6VXNlcjU1NzQ3OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5574795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AveryLiu",
"html_url": "https://github.com/AveryLiu",
"followers_url": "https://api.github.com/users/AveryLiu/followers",
"following_url": "https://api.github.com/users/AveryLiu/following{/other_user}",
"gists_url": "https://api.github.com/users/AveryLiu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AveryLiu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AveryLiu/subscriptions",
"organizations_url": "https://api.github.com/users/AveryLiu/orgs",
"repos_url": "https://api.github.com/users/AveryLiu/repos",
"events_url": "https://api.github.com/users/AveryLiu/events{/privacy}",
"received_events_url": "https://api.github.com/users/AveryLiu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1946?src=pr&el=h1) Report\n> Merging [#1946](https://codecov.io/gh/huggingface/transformers/pull/1946?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5d3b8daad2cc6287d30f03f8a96d0a1f7bc8d0dc?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1946?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1946 +/- ##\n======================================\n Coverage 84% 84% \n======================================\n Files 97 97 \n Lines 14340 14340 \n======================================\n Hits 12047 12047 \n Misses 2293 2293\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1946?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1946/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2JlcnQucHk=) | `87.77% <100%> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1946?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1946?src=pr&el=footer). Last update [5d3b8da...e1d116d](https://codecov.io/gh/huggingface/transformers/pull/1946?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,574 | 1,574 | 1,574 | NONE | null | Changed `emove` to `remove` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1946/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1946",
"html_url": "https://github.com/huggingface/transformers/pull/1946",
"diff_url": "https://github.com/huggingface/transformers/pull/1946.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1946.patch",
"merged_at": 1574776893000
} |
https://api.github.com/repos/huggingface/transformers/issues/1945 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1945/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1945/comments | https://api.github.com/repos/huggingface/transformers/issues/1945/events | https://github.com/huggingface/transformers/issues/1945 | 528,471,135 | MDU6SXNzdWU1Mjg0NzExMzU= | 1,945 | When using the Bert model | {
"login": "bigzhouj",
"id": 29719942,
"node_id": "MDQ6VXNlcjI5NzE5OTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/29719942?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bigzhouj",
"html_url": "https://github.com/bigzhouj",
"followers_url": "https://api.github.com/users/bigzhouj/followers",
"following_url": "https://api.github.com/users/bigzhouj/following{/other_user}",
"gists_url": "https://api.github.com/users/bigzhouj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bigzhouj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bigzhouj/subscriptions",
"organizations_url": "https://api.github.com/users/bigzhouj/orgs",
"repos_url": "https://api.github.com/users/bigzhouj/repos",
"events_url": "https://api.github.com/users/bigzhouj/events{/privacy}",
"received_events_url": "https://api.github.com/users/bigzhouj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"These two files are **not** in the Transformers library **now**.\r\n\r\n> ## Questions & Help\r\n> pregenerate_training_data.py and finetune_on_pregenerated.py Is it in the project.If so, where?",
"As @TheEdoardo93 says, these files were community maintained and have been removed a few months ago. ",
"so what is the procedure to fine tune BERT on my data?"
] | 1,574 | 1,587 | 1,574 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
pregenerate_training_data.py and finetune_on_pregenerated.py Is it in the project.If so, where? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1945/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1944 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1944/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1944/comments | https://api.github.com/repos/huggingface/transformers/issues/1944/events | https://github.com/huggingface/transformers/pull/1944 | 528,365,031 | MDExOlB1bGxSZXF1ZXN0MzQ1NDQwOTAw | 1,944 | tokenization progress made more sensible via tqdm | {
"login": "iedmrc",
"id": 13666448,
"node_id": "MDQ6VXNlcjEzNjY2NDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/13666448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iedmrc",
"html_url": "https://github.com/iedmrc",
"followers_url": "https://api.github.com/users/iedmrc/followers",
"following_url": "https://api.github.com/users/iedmrc/following{/other_user}",
"gists_url": "https://api.github.com/users/iedmrc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iedmrc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iedmrc/subscriptions",
"organizations_url": "https://api.github.com/users/iedmrc/orgs",
"repos_url": "https://api.github.com/users/iedmrc/repos",
"events_url": "https://api.github.com/users/iedmrc/events{/privacy}",
"received_events_url": "https://api.github.com/users/iedmrc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1944?src=pr&el=h1) Report\n> Merging [#1944](https://codecov.io/gh/huggingface/transformers/pull/1944?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5d3b8daad2cc6287d30f03f8a96d0a1f7bc8d0dc?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1944?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1944 +/- ##\n==========================================\n+ Coverage 84% 84.01% +<.01% \n==========================================\n Files 97 97 \n Lines 14340 14341 +1 \n==========================================\n+ Hits 12047 12048 +1 \n Misses 2293 2293\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1944?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1944/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `92.16% <100%> (+0.01%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1944?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1944?src=pr&el=footer). Last update [5d3b8da...36b211d](https://codecov.io/gh/huggingface/transformers/pull/1944?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Ok for me.\r\nOk for you @LysandreJik?",
"I feel this might output too much text when tokenizing a lot of small sequences (which is the case for practically every example). It would be useful when tokenizing large datasets though. Maybe test if the length is superior to, say, 10000 before? What do you think?",
"You might be right, I have never thought about that. But it's a stubborn fact that when the time comes to tokenize larger datasets. How could we test about length? It is not about the `tokenized_text` 's length.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,583 | 1,583 | CONTRIBUTOR | null | Because tokenization takes a relatively long time, having progress being visualized via tqdm would be nice. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1944/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1944",
"html_url": "https://github.com/huggingface/transformers/pull/1944",
"diff_url": "https://github.com/huggingface/transformers/pull/1944.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1944.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1943 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1943/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1943/comments | https://api.github.com/repos/huggingface/transformers/issues/1943/events | https://github.com/huggingface/transformers/issues/1943 | 528,356,599 | MDU6SXNzdWU1MjgzNTY1OTk= | 1,943 | [email protected]: Permission denied (publickey) when fetching | {
"login": "iedmrc",
"id": 13666448,
"node_id": "MDQ6VXNlcjEzNjY2NDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/13666448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iedmrc",
"html_url": "https://github.com/iedmrc",
"followers_url": "https://api.github.com/users/iedmrc/followers",
"following_url": "https://api.github.com/users/iedmrc/following{/other_user}",
"gists_url": "https://api.github.com/users/iedmrc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iedmrc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iedmrc/subscriptions",
"organizations_url": "https://api.github.com/users/iedmrc/orgs",
"repos_url": "https://api.github.com/users/iedmrc/repos",
"events_url": "https://api.github.com/users/iedmrc/events{/privacy}",
"received_events_url": "https://api.github.com/users/iedmrc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yeah you can clone using https, it's usually easier (github actually recommends it for simple workflows)\r\n\r\ncc @rlouf ",
"Nice! Then, we might update [this line](https://github.com/huggingface/transformers/blame/5d3b8daad2cc6287d30f03f8a96d0a1f7bc8d0dc/CONTRIBUTING.md#L109) since Github encourages https instead of ssh.",
"Thanks for pointing this out, I just made the change."
] | 1,574 | 1,574 | 1,574 | CONTRIBUTOR | null | When you wanted to sync forked repository with base, as described [here](https://github.com/huggingface/transformers/blame/aa92a184d2b92faadec975139ad55e2ae749362c/CONTRIBUTING.md#L140)
You get:
```
➜ transformers (master) ✗ git fetch upstream
[email protected]: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
```
[This](https://stackoverflow.com/a/34081606) solution introduced on stackoverflow fixes the problem. [This](https://stackoverflow.com/questions/13509293/git-fatal-could-not-read-from-remote-repository#comment85002398_34081606) one also says:
> If the repo owner has not set up ssh keys then you will likely have this issue. The fix as indicated is to use https instead, or have the repo owner set up ssh
Could you please fix this (by setting up ssh?), in order to make contributing easy?
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1943/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1942 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1942/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1942/comments | https://api.github.com/repos/huggingface/transformers/issues/1942/events | https://github.com/huggingface/transformers/issues/1942 | 528,299,333 | MDU6SXNzdWU1MjgyOTkzMzM= | 1,942 | Wrong paraphrase in the TF2/PyTorch README example. | {
"login": "isaprykin",
"id": 234070,
"node_id": "MDQ6VXNlcjIzNDA3MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/234070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isaprykin",
"html_url": "https://github.com/isaprykin",
"followers_url": "https://api.github.com/users/isaprykin/followers",
"following_url": "https://api.github.com/users/isaprykin/following{/other_user}",
"gists_url": "https://api.github.com/users/isaprykin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/isaprykin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isaprykin/subscriptions",
"organizations_url": "https://api.github.com/users/isaprykin/orgs",
"repos_url": "https://api.github.com/users/isaprykin/repos",
"events_url": "https://api.github.com/users/isaprykin/events{/privacy}",
"received_events_url": "https://api.github.com/users/isaprykin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, I'm investigating. For now, I confirm the issue that you observe. I've tested on both CPU and GPU and it gives the same result. I've tested with Pytorch and TF models too, same result. Now, let's track the cause!",
"Hi again,\r\nOk I've retrained a Pytorch model using `run_glue.py` on MRPC to check.\r\nThe final metrics are:\r\n\r\n```\r\n***** Eval results *****\r\nacc = 0.8382608695652174\r\nacc_and_f1 = 0.8608840882272851\r\nf1 = 0.8835073068893529\r\n```\r\n\r\nSo it's not crazy high but not near random either.\r\n\r\nThen I've retested:\r\n```\r\nIs \"This research was consistent with his findings\" same as:\r\n\r\n\"His findings were compatible with this research.\" ?\r\nTRUE -> 😄\r\n\r\n\"His findings were not compatible with this research.\" ?\r\nTRUE -> 😢\r\n```\r\n\r\nI've taken a more complex sentence from training set\r\n\r\n```\r\nIs 'Amrozi accused his brother, whom he called \"the witness\", of deliberately distorting his evidence.' same as:\r\n\r\n\"Referring to him as only \"the witness\", Amrozi accused his brother of deliberately distorting his evidence.\" ?\r\nTRUE -> 😄\r\n\r\n\"Referring to him as only \"the witness\", Amrozi accused his brother of not deliberately distorting his evidence.\" ?\r\nTRUE -> 😢\r\n\r\n\"platypus to him as only \"the platypus\", platypus accused his platypus of deliberately platypus his evidence.\" ?\r\nTRUE -> 😭 \r\n\r\n\"platypus to him as only \"the platypus\", platypus accused his platypus of deliberately platypus his platypus.\" ?\r\nFALSE -> 🌝 \r\n```\r\n\r\nHere we see that it's not robust to `not` as in the primary case. Then it's also not robust to replacing any word with `platypus` until I replace 6 words (which is quite disappointing on the performance of the model, it's true).\r\n\r\nI've taken sentences from test set:\r\n\r\n```\r\nIs \"A tropical storm rapidly developed in the Gulf of Mexico Sunday and was expected to hit somewhere along the Texas or Louisiana coasts by Monday night.\" same as:\r\n\r\n\"A tropical storm rapidly developed in the Gulf of Mexico on Sunday and could have hurricane-force winds when it hits land somewhere along the Louisiana coast Monday night.\" ?\r\nTRUE -> 😢\r\n----------------------------------------------------------------------------------------\r\nIs \"The broader Standard & Poor's 500 Index <.SPX> was 0.46 points lower, or 0.05 percent, at 997.02.\" same as:\r\n\r\n\"The technology-laced Nasdaq Composite Index .IXIC was up 7.42 points, or 0.45 percent, at 1,653.44.\" ?\r\nFALSE -> 😄\r\n--------------------------------------------------------------------------------------------\r\nIs \"NASA plans to follow-up the rovers' missions with additional orbiters and landers before launching a long-awaited sample-return flight.\" same as:\r\n\r\n\"NASA plans to explore the Red Planet with ever more sophisticated robotic orbiters and landers.\"\r\nFALSE -> 😄\r\n----------------------------------------------------------------------------------------\r\nIs \"We are piloting it there to see whether we roll it out to other products.\" same as:\r\n\r\n\"Macromedia is piloting this product activation system in Contribute to test whether to roll it out to other products.\"\r\nTRUE -> 😄\r\n```\r\n\r\nHere we see that sometimes it works, sometimes not. I might be wrong but I haven't seen anything in the code that could explain this issue (83% is the final accuracy on dev set... ok but it remains 1 error on 5 cases). A priori, I'd say that basic BERT trained like that on this tiny dataset is simply not that robust for that task in a generalized case and would need more data or at least more data augmentation.\r\n\r\nDo you share my conclusion or see something different?\r\n\r\n\r\n\r\n",
"Thanks for the investigation. Was the performance ever different at the time when that example was put into the README?",
"TBH, personally I wasn't there, so I don't know...\r\nIf anyone at huggingface can answer this question?\r\nI've been looking at MRPC leaderboard https://gluebenchmark.com/leaderboard/ and BERT is around my training above so it looks like a normal score.",
"MRPC is a very small dataset (the smallest among all GLUE benchmark and that's why we use it as an example). I should not be expected to generalize well and be usable in real-life settings.\r\nThe perfrormance you got @mandubian are a normal score indeed.",
"Sounds like we don't think there's an actionable issue here."
] | 1,574 | 1,576 | 1,576 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): TFBertForSequenceClassification
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [x] the official example scripts: https://github.com/huggingface/transformers#quick-tour-tf-20-training-and-pytorch-interoperability
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: Sequence Classification
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1. Run the attached script.
2. Observe
```
$ /Users/igor/projects/ml-venv/bin/python /Users/igor/projects/transformers-experiments/paraphrasing_issue.py
2019-11-25 08:58:53.985213: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fed57a2be00 executing computations on platform Host. Devices:
2019-11-25 08:58:53.985243: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
INFO:absl:Overwrite dataset info from restored data version.
INFO:absl:Reusing dataset glue (/Users/igor/tensorflow_datasets/glue/mrpc/0.0.2)
INFO:absl:Constructing tf.data.Dataset for split None, from /Users/igor/tensorflow_datasets/glue/mrpc/0.0.2
Train for 115 steps, validate for 7 steps
Epoch 1/2
4/115 [>.............................] - ETA: 1:22:04 - loss: 0.6936 5/115 [>.............................] - ETA: 1:18:44 - loss: 0.6876 6/115 [>.............................] - ETA: 1:16:01 - loss: 0.6760115/115 [==============================] - 4587s 40s/step - loss: 0.5850 - accuracy: 0.7045 - val_loss: 0.4695 - val_accuracy: 0.8137
Epoch 2/2
115/115 [==============================] - 4927s 43s/step - loss: 0.3713 - accuracy: 0.8435 - val_loss: 0.3825 - val_accuracy: 0.8358
**sentence_1 is a paraphrase of sentence_0
sentence_2 is a paraphrase of sentence_0**
```
3. Wonder why.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
```
import tensorflow as tf
import tensorflow_datasets
from transformers import *
# Load dataset, tokenizer, model from pretrained model/vocabulary
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')
data = tensorflow_datasets.load('glue/mrpc')
# Prepare dataset for GLUE as a tf.data.Dataset instance
train_dataset = glue_convert_examples_to_features(data['train'], tokenizer, max_length=128, task='mrpc')
valid_dataset = glue_convert_examples_to_features(data['validation'], tokenizer, max_length=128, task='mrpc')
train_dataset = train_dataset.shuffle(100).batch(32).repeat(2)
valid_dataset = valid_dataset.batch(64)
# Prepare training: Compile tf.keras model with optimizer, loss and learning rate schedule
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
# Train and evaluate using tf.keras.Model.fit()
history = model.fit(train_dataset, epochs=2, steps_per_epoch=115,
validation_data=valid_dataset, validation_steps=7)
# Load the TensorFlow model in PyTorch for inspection
model.save_pretrained('./save/')
pytorch_model = BertForSequenceClassification.from_pretrained('./save/', from_tf=True)
# Quickly test a few predictions - MRPC is a paraphrasing task, let's see if our model learned the task
sentence_0 = "This research was consistent with his findings."
sentence_1 = "His findings were compatible with this research."
sentence_2 = "His findings were not compatible with this research."
inputs_1 = tokenizer.encode_plus(sentence_0, sentence_1, add_special_tokens=True, return_tensors='pt')
inputs_2 = tokenizer.encode_plus(sentence_0, sentence_2, add_special_tokens=True, return_tensors='pt')
pred_1 = pytorch_model(inputs_1['input_ids'], token_type_ids=inputs_1['token_type_ids'])[0].argmax().item()
pred_2 = pytorch_model(inputs_2['input_ids'], token_type_ids=inputs_2['token_type_ids'])[0].argmax().item()
print("sentence_1 is", "a paraphrase" if pred_1 else "not a paraphrase", "of sentence_0")
print("sentence_2 is", "a paraphrase" if pred_2 else "not a paraphrase", "of sentence_0")
```
## Expected behavior
```
sentence_1 is a paraphrase of sentence_0
sentence_2 is not a paraphrase of sentence_0
```
## Environment
* OS: MacOS
* Python version: 3.7.5
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): last commit afaa33585109550f9ecaaee4e47f187aaaefedd0 as of Sat Nov 23 11:34:45 2019 -0500.
* Using GPU ? nope
* Distributed of parallel setup ? single machine
* Any other relevant information: TF version is 2.0.0
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1942/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/1942/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1941 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1941/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1941/comments | https://api.github.com/repos/huggingface/transformers/issues/1941/events | https://github.com/huggingface/transformers/issues/1941 | 528,224,042 | MDU6SXNzdWU1MjgyMjQwNDI= | 1,941 | NER - sciBERT weights not initialized. | {
"login": "zampierimatteo91",
"id": 40203129,
"node_id": "MDQ6VXNlcjQwMjAzMTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/40203129?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zampierimatteo91",
"html_url": "https://github.com/zampierimatteo91",
"followers_url": "https://api.github.com/users/zampierimatteo91/followers",
"following_url": "https://api.github.com/users/zampierimatteo91/following{/other_user}",
"gists_url": "https://api.github.com/users/zampierimatteo91/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zampierimatteo91/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zampierimatteo91/subscriptions",
"organizations_url": "https://api.github.com/users/zampierimatteo91/orgs",
"repos_url": "https://api.github.com/users/zampierimatteo91/repos",
"events_url": "https://api.github.com/users/zampierimatteo91/events{/privacy}",
"received_events_url": "https://api.github.com/users/zampierimatteo91/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, this means that the script did not find your tokenization file. You're pointing to the folder `../scibert_model` but either that folder does not exist, either it does not contain a `vocab.txt` file which is required by the `BertTokenizer`.",
"Hi LysandreJik,\r\n\r\nMany thanks for your reply.\r\n`vocab.txt` does indeed exist, as well as the folder. I can see it is loaded in the section I proposed above.\r\nIt also states that the weights from the provided model in the folder are loaded, but then it specifies that weights for `BertForTokenClassification` are not. \r\nAre there weights for separate objects?\r\nSorry for the stupid questions, just trying to understand whether I'm doing things the proper way.",
"Could you try to load the tokenizer/model in a standalone script, or in a python console? Here are a the required commands to load a tokenizer and a model from a saved checkpoint:\r\n\r\n```py\r\nfrom transformers import BertTokenizer, BertModelForTokenClassification\r\n\r\ntokenizer = BertTokenizer.from_pretrained(folder)\r\nmodel = BertModelForTokenClassification.from_pretrained(folder)\r\n```\r\n\r\nThanks!",
"Sure, I loaded it in Ipython.\r\nI just changed `BertModelForTokenClassification` to `BertForTokenClassification` and I'm in folder `/transformers` instead of `transformers/examples`:\r\n```\r\nIn [2]: from transformers import BertTokenizer, BertForTokenClassification \r\n\r\nIn [3]: tokenizer = BertTokenizer.from_pretrained('./scibert_model/') \r\n\r\nIn [4]: model = BertForTokenClassification.from_pretrained('./scibert_model/') \r\n\r\nIn [5]: print(tokenizer) \r\n<transformers.tokenization_bert.BertTokenizer object at 0x7fe34c08add8>\r\n\r\nIn [6]: print(model) \r\nBertForTokenClassification(\r\n (bert): BertModel(\r\n (embeddings): BertEmbeddings(\r\n (word_embeddings): Embedding(31090, 768, padding_idx=0)\r\n (position_embeddings): Embedding(512, 768)\r\n (token_type_embeddings): Embedding(2, 768)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (encoder): BertEncoder(\r\n (layer): ModuleList(\r\n (0): BertLayer(\r\n (attention): BertAttention(\r\n (self): BertSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): BertSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): BertIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): BertOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (1): BertLayer(\r\n (attention): BertAttention(\r\n (self): BertSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): BertSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): BertIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): BertOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (2): BertLayer(\r\n (attention): BertAttention(\r\n (self): BertSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): BertSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): BertIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): BertOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (3): BertLayer(\r\n (attention): BertAttention(\r\n (self): BertSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): BertSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): BertIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): BertOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (4): BertLayer(\r\n (attention): BertAttention(\r\n (self): BertSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): BertSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): BertIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): BertOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (5): BertLayer(\r\n (attention): BertAttention(\r\n (self): BertSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): BertSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): BertIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): BertOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (6): BertLayer(\r\n (attention): BertAttention(\r\n (self): BertSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): BertSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): BertIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): BertOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (7): BertLayer(\r\n (attention): BertAttention(\r\n (self): BertSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): BertSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): BertIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): BertOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (8): BertLayer(\r\n (attention): BertAttention(\r\n (self): BertSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): BertSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): BertIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): BertOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (9): BertLayer(\r\n (attention): BertAttention(\r\n (self): BertSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): BertSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): BertIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): BertOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (10): BertLayer(\r\n (attention): BertAttention(\r\n (self): BertSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): BertSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): BertIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): BertOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (11): BertLayer(\r\n (attention): BertAttention(\r\n (self): BertSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): BertSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): BertIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): BertOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n )\r\n )\r\n (pooler): BertPooler(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (activation): Tanh()\r\n )\r\n )\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n (classifier): Linear(in_features=768, out_features=2, bias=True)\r\n)\r\n\r\n\r\n```",
"Ah right, my mistake I thought there was an error in your first message but there actually is none, it's just a warning! I misunderstood.\r\n\r\nThe first warning concerning the tokenizer means that no special tokens were added when the vocabulary was saved.\r\n\r\nThe second warning means that some weights were not loaded by the model: `['classifier.weight', 'classifier.bias']` and that some weights were not present in the checkpoint: ` ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias']`.\r\n\r\nThis means that if you want to use this model for some task you will need to fine-tune it as the classifier layers were initialized randomly. This is the case for most of our models as each task requires specific training.",
"Many thanks for your quick response and your availability, @LysandreJik! \r\n\r\nBy fine-tuning it, do you mean I should run it in training and evaluation mode without prediction?",
"Fine-tuning a model means that in **training mode**:\r\n- you initialize the weights of the entire model you have equals to the ones in SciBERT model\r\n- after that, you train the model with _your own data_ in order to obtain better performance on your specific task\r\n\r\nOnce you have finished to train the model, you can use for **prediction purpose** and see whether the model has enough accuracy for your task.\r\n\r\n> Many thanks for your quick response and your availability, @LysandreJik!\r\n> \r\n> By fine-tuning it, do you mean I should run it in training and evaluation mode without prediction?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,580 | 1,580 | NONE | null | ## ❓ Questions & Help
<!-- Custom weights not initialized when training NER on dataset. -->
Hi all,
first of all thanks for this awesome interface.
Coming to the issue:
I am trying out NER on the Anatem dataset, using Google Colab's GPU.
I imported SciBERT (and BioBERT) models with the solutions provided in issue [457](https://github.com/huggingface/transformers/issues/457).
For clarity, batch_size is 8 because when set to 16 the GPU goes into seg fault.
the scritp is the following. I am into the `transformers/examples` folder
```
!python3 run_ner.py --data_dir ../datasets/ \
--model_type bert \
--labels ../datasets/labels.txt \
--model_name_or_path ../scibert_model \
--output_dir ../results_scibert \
--max_seq_length 512 \
--num_train_epochs 3 \
--per_gpu_train_batch_size 8 \
--save_steps 750 \
--seed 1 \
--do_train \
--do_eval \
--do_predict
```
And the warning is:
```
11/25/2019 16:47:14 - INFO - transformers.tokenization_utils - Model name '../scibert_model' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). Assuming '../scibert_model' is a path or url to a directory containing tokenizer files.
11/25/2019 16:47:14 - INFO - transformers.tokenization_utils - Didn't find file ../scibert_model/added_tokens.json. We won't load it.
11/25/2019 16:47:14 - INFO - transformers.tokenization_utils - Didn't find file ../scibert_model/special_tokens_map.json. We won't load it.
11/25/2019 16:47:14 - INFO - transformers.tokenization_utils - Didn't find file ../scibert_model/tokenizer_config.json. We won't load it.
11/25/2019 16:47:14 - INFO - transformers.tokenization_utils - loading file ../scibert_model/vocab.txt
11/25/2019 16:47:14 - INFO - transformers.tokenization_utils - loading file None
11/25/2019 16:47:14 - INFO - transformers.tokenization_utils - loading file None
11/25/2019 16:47:14 - INFO - transformers.tokenization_utils - loading file None
11/25/2019 16:47:14 - INFO - transformers.modeling_utils - loading weights file ../scibert_model/pytorch_model.bin
11/25/2019 16:47:20 - INFO - transformers.modeling_utils - Weights of BertForTokenClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias']
11/25/2019 16:47:20 - INFO - transformers.modeling_utils - Weights from pretrained model not used in BertForTokenClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias']
```
Could you please explain the meaning of this? I have read other issues about it, but I didn't really grasp the meaning and the solution.
Thank you very much! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1941/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1940 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1940/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1940/comments | https://api.github.com/repos/huggingface/transformers/issues/1940/events | https://github.com/huggingface/transformers/pull/1940 | 528,217,548 | MDExOlB1bGxSZXF1ZXN0MzQ1MzIyNTM0 | 1,940 | Add TF2 NER example | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1940?src=pr&el=h1) Report\n> Merging [#1940](https://codecov.io/gh/huggingface/transformers/pull/1940?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/88b317739fe56888528c857fc8e90967148a0051?src=pr&el=desc) will **decrease** coverage by `1.04%`.\n> The diff coverage is `51.37%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1940?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1940 +/- ##\n==========================================\n- Coverage 84.26% 83.21% -1.05% \n==========================================\n Files 104 106 +2 \n Lines 15431 15679 +248 \n==========================================\n+ Hits 13003 13048 +45 \n- Misses 2428 2631 +203\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1940?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tokenization\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9kaXN0aWxiZXJ0LnB5) | `100% <ø> (ø)` | :arrow_up: |\n| [transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2Rpc3RpbGJlcnQucHk=) | `95.87% <ø> (ø)` | :arrow_up: |\n| [transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fZGlzdGlsYmVydC5weQ==) | `89.74% <ø> (ø)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2F1dG8ucHk=) | `32.5% <ø> (-18.75%)` | :arrow_down: |\n| [transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2F1dG8ucHk=) | `31.11% <0%> (-0.71%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `95.13% <0%> (+2.18%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3RyYW5zZm9feGxfdXRpbGl0aWVzLnB5) | `85.57% <100%> (ø)` | :arrow_up: |\n| [transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `90.86% <100%> (+1.42%)` | :arrow_up: |\n| [transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fdXRpbHMucHk=) | `88.31% <100%> (-3.8%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX29wZW5haS5weQ==) | `95.92% <100%> (ø)` | :arrow_up: |\n| ... and [38 more](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1940?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1940?src=pr&el=footer). Last update [88b3177...938da1c](https://codecov.io/gh/huggingface/transformers/pull/1940?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I finished to implement the TF2 NER, it should be really similar to the Pytorch one as I tried to reproduce most of the parameters.",
"Thanks for your contribution @jplu \r\n\r\nDo you mind doing a clean rebase (and force-push to this branch), or create a new PR, with just your changes?",
"I will remake the PR, that will be cleaner :)"
] | 1,574 | 1,575 | 1,575 | CONTRIBUTOR | null | Hi,
Here my small contribution. I have implemented the TF2 version of the NER example already existing in the repo. I tried to have an implementation as close as possible of the Pytorch version.
Let me know for any needed changes :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1940/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1940",
"html_url": "https://github.com/huggingface/transformers/pull/1940",
"diff_url": "https://github.com/huggingface/transformers/pull/1940.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1940.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1939 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1939/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1939/comments | https://api.github.com/repos/huggingface/transformers/issues/1939/events | https://github.com/huggingface/transformers/issues/1939 | 528,215,183 | MDU6SXNzdWU1MjgyMTUxODM= | 1,939 | Abruptly model training was stopped | {
"login": "ellurunaresh",
"id": 10192331,
"node_id": "MDQ6VXNlcjEwMTkyMzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/10192331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ellurunaresh",
"html_url": "https://github.com/ellurunaresh",
"followers_url": "https://api.github.com/users/ellurunaresh/followers",
"following_url": "https://api.github.com/users/ellurunaresh/following{/other_user}",
"gists_url": "https://api.github.com/users/ellurunaresh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ellurunaresh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ellurunaresh/subscriptions",
"organizations_url": "https://api.github.com/users/ellurunaresh/orgs",
"repos_url": "https://api.github.com/users/ellurunaresh/repos",
"events_url": "https://api.github.com/users/ellurunaresh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ellurunaresh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,580 | 1,580 | NONE | null | I'm fine tuning NER custom dataset using BERT cased model. I'm using script of examples/run_ner.py to fine tune. **In 2nd epoch training was stopped abruptly without displaying any error.**
Epoch: 67%|██████▋ | 2/3 [2:48:41<1:24:21, 5061.32s/it]
Iteration: 98%|█████████▊| 3734/3816 [1:22:30<01:48, 1.32s/it]
Iteration: 98%|█████████▊| 3735/3816 [1:22:31<01:47, 1.33s/it]
**The training was stopped at 98%.**
Training details are given here:
no. of training sentences: 122095, batch size=32, num_epochs=3, save_steps=750, GPU server: Tesla K40m
Could anybody help me out how to solve this issue and please let me know if you need any further information.
Thanks in advance
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1939/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1938 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1938/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1938/comments | https://api.github.com/repos/huggingface/transformers/issues/1938/events | https://github.com/huggingface/transformers/issues/1938 | 528,186,316 | MDU6SXNzdWU1MjgxODYzMTY= | 1,938 | Load output file from fine-tuned bert language model | {
"login": "avinashsai",
"id": 22453634,
"node_id": "MDQ6VXNlcjIyNDUzNjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/22453634?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avinashsai",
"html_url": "https://github.com/avinashsai",
"followers_url": "https://api.github.com/users/avinashsai/followers",
"following_url": "https://api.github.com/users/avinashsai/following{/other_user}",
"gists_url": "https://api.github.com/users/avinashsai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avinashsai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avinashsai/subscriptions",
"organizations_url": "https://api.github.com/users/avinashsai/orgs",
"repos_url": "https://api.github.com/users/avinashsai/repos",
"events_url": "https://api.github.com/users/avinashsai/events{/privacy}",
"received_events_url": "https://api.github.com/users/avinashsai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hmm this script should not output any `.txt` files except `eval_results.txt`. What is inside this output directory except for this file?",
"I didn't provide any evaluation file.\r\nHowever logging info is as follows:\r\n\r\n- Creating features from dataset file at \r\n- Saving features into cached file **bert-base-cased_cached_lm_32.txt**"
] | 1,574 | 1,574 | 1,574 | NONE | null | Hi,
I have fine-tuned bert cased language model using **run_lm_finetuning.py**. 'output' is the output file directory and '**bert-base-cased.txt**' is another file created by the model.
1. Does the .txt file mentioned has the output of fine-tuned model??
2. If so, how should I open the file?? I am getting 'UTF-8' encoding issue with the file.
Thank you.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1938/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1937 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1937/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1937/comments | https://api.github.com/repos/huggingface/transformers/issues/1937/events | https://github.com/huggingface/transformers/issues/1937 | 528,176,205 | MDU6SXNzdWU1MjgxNzYyMDU= | 1,937 | access to the vocabulary | {
"login": "weiguowilliam",
"id": 31396452,
"node_id": "MDQ6VXNlcjMxMzk2NDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/31396452?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/weiguowilliam",
"html_url": "https://github.com/weiguowilliam",
"followers_url": "https://api.github.com/users/weiguowilliam/followers",
"following_url": "https://api.github.com/users/weiguowilliam/following{/other_user}",
"gists_url": "https://api.github.com/users/weiguowilliam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/weiguowilliam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/weiguowilliam/subscriptions",
"organizations_url": "https://api.github.com/users/weiguowilliam/orgs",
"repos_url": "https://api.github.com/users/weiguowilliam/repos",
"events_url": "https://api.github.com/users/weiguowilliam/events{/privacy}",
"received_events_url": "https://api.github.com/users/weiguowilliam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can obtain the **50.257 different tokens** with the following code:\r\n```\r\nimport transformers\r\nfrom transformers import GPT2Tokenizer\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\nvocab = list(tokenizer.encoder.keys())\r\nassert(len(vocab) == tokenizer.vocab_size) # it returns True!\r\n```\r\n\r\nClose the issue if you've resolved your problem! ;)\r\n\r\n> ## Questions & Help\r\n> Is there any way we can get access to the vocabulary in GPT2? Like a list: [subtoken1, subtoken2, ...subtoken 10000...]\r\n> \r\n> Thank you in advance!",
"thank you!"
] | 1,574 | 1,574 | 1,574 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Is there any way we can get access to the vocabulary in GPT2? Like a list: [subtoken1, subtoken2, ...subtoken 10000...]
Thank you in advance! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1937/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1937/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1936 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1936/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1936/comments | https://api.github.com/repos/huggingface/transformers/issues/1936/events | https://github.com/huggingface/transformers/issues/1936 | 528,056,906 | MDU6SXNzdWU1MjgwNTY5MDY= | 1,936 | how to output specific layer of TFBertForSequenceClassification, or add layer? | {
"login": "roccqqck",
"id": 34628766,
"node_id": "MDQ6VXNlcjM0NjI4NzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/34628766?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/roccqqck",
"html_url": "https://github.com/roccqqck",
"followers_url": "https://api.github.com/users/roccqqck/followers",
"following_url": "https://api.github.com/users/roccqqck/following{/other_user}",
"gists_url": "https://api.github.com/users/roccqqck/gists{/gist_id}",
"starred_url": "https://api.github.com/users/roccqqck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/roccqqck/subscriptions",
"organizations_url": "https://api.github.com/users/roccqqck/orgs",
"repos_url": "https://api.github.com/users/roccqqck/repos",
"events_url": "https://api.github.com/users/roccqqck/events{/privacy}",
"received_events_url": "https://api.github.com/users/roccqqck/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please, copy and paste the source code in order to reproduce your problem.\r\n\r\n> how to output the last layer of TFBertForSequenceClassification?\r\n> \r\n> I want to output the layer before classifier (Dense)\r\n> \r\n> ```\r\n> Model: \"tf_bert_for_sequence_classification\"\r\n> _________________________________________________________________\r\n> Layer (type) Output Shape Param # \r\n> =================================================================\r\n> bert (TFBertMainLayer) multiple 102267648 \r\n> _________________________________________________________________\r\n> dropout_37 (Dropout) multiple 0 \r\n> _________________________________________________________________\r\n> classifier (Dense) multiple 3845 \r\n> =================================================================\r\n> Total params: 102,271,493\r\n> Trainable params: 102,271,493\r\n> Non-trainable params: 0\r\n> ```\r\n> \r\n> I tried tf.keras function\r\n> \r\n> ```\r\n> dense1_layer_model = Model(inputs=model.input, outputs=model.get_layer('bert').output)\r\n> ```\r\n> \r\n> It didnt worked.",
"> Please, copy and paste the source code in order to reproduce your problem.\r\n\r\nthis is my original code\r\n```\r\nmodel = TFBertForSequenceClassification.from_pretrained('bert-base-chinese', num_labels=5)\r\nmodel.summary()\r\n```\r\n```\r\nModel: \"tf_bert_for_sequence_classification\"\r\n_________________________________________________________________\r\nLayer (type) Output Shape Param # \r\n=================================================================\r\nbert (TFBertMainLayer) multiple 102267648 \r\n_________________________________________________________________\r\ndropout_37 (Dropout) multiple 0 \r\n_________________________________________________________________\r\nclassifier (Dense) multiple 3845 \r\n=================================================================\r\nTotal params: 102,271,493\r\nTrainable params: 102,271,493\r\nNon-trainable params: 0\r\n```\r\n\r\n\r\nI know if I use TFBertModel, I could get the (N, 512, 768) output without fine tuning.\r\n```\r\nmodel = TFBertModel.from_pretrained('bert-base-chinese')\r\nmodel.summary()\r\n```\r\nBut I need the (N, 512, 768) output after fine tuning.",
"I tried this too\r\n```\r\nmodel = Sequential()\r\nmodel.add( TFBertModel.from_pretrained('bert-base-chinese') )\r\nmodel.add( Dropout(0.5))\r\nmodel.add( Dense(5,activation=\"softmax\") )\r\nmodel.summary()\r\n```\r\n```\r\nValueError: This model has not yet been built. Build the model first by calling `build()` or calling `fit()` with some data, or specify an `input_shape` argument in the first layer(s) for automatic build.\r\n```",
"In order to create a `Sequential` model with TensorFlow.Keras framework, you have to specify the input shape through `input_shape` parameter on the input layer, otherwise TensorFlow.Keras doesn't know the input shape of the model you're creating.",
"> In order to create a `Sequential` model with TensorFlow.Keras framework, you have to specify the input shape through `input_shape` parameter on the input layer, otherwise TensorFlow.Keras doesn't know the input shape of the model you're creating.\r\n\r\nadd layer\r\n```\r\ninput_layer = Input(shape = (512,), dtype='int64')\r\nbert = TFBertModel.from_pretrained('bert-base-chinese')(input_layer)\r\nbert = bert[0] # i think there is a bug here\r\nflat = Flatten()(bert)\r\nclassifier = Dense(units=5)(flat)\r\nmodel = Model(inputs=input_layer, outputs=classifier)\r\nmodel.summary()\r\n```\r\n```\r\nModel: \"model_1\"\r\n_________________________________________________________________\r\nLayer (type) Output Shape Param # \r\n=================================================================\r\ninput_4 (InputLayer) [(None, 512)] 0 \r\n_________________________________________________________________\r\ntf_bert_model_3 (TFBertModel ((None, 512, 768), (None, 102267648 \r\n_________________________________________________________________\r\nflatten_2 (Flatten) (None, 393216) 0 \r\n_________________________________________________________________\r\ndense_1 (Dense) (None, 5) 1966085 \r\n=================================================================\r\nTotal params: 104,233,733\r\nTrainable params: 104,233,733\r\nNon-trainable params: 0\r\n```\r\n\r\n\r\nthanks it worked!!!\r\n\r\n\r\n\r\nfit\r\n```\r\noptimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)\r\nloss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\r\nmetric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')\r\nmodel.compile(optimizer=optimizer, loss=loss, metrics=[metric])\r\n\r\nmodel_fit = model.fit(train_input_ids, train_label, \r\n batch_size=4, epochs=4, \r\n validation_data=(validation_input_ids, validation_label)\r\n )\r\n```\r\n\r\nextract layer\r\n```\r\nflatten_layer_model = Model(inputs=model.input, outputs=model.get_layer('flatten_2').output)\r\npredictions = flatten_layer_model.predict(validation_input_ids)\r\nprint(type(predictions))\r\nprint(predictions.shape)\r\n```\r\n```\r\n<class 'numpy.ndarray'>\r\n(8359, 393216)\r\n```",
"Hi @roccqqck, I am also doing something similar. Most of my queries are cleared by your comment. I have just one more doubt. The [documentation](https://huggingface.co/transformers/model_doc/bert.html#tfbertforsequenceclassification) states that the input of model should look like this `[input_ids, attention_mask]`. So, are you providing attention mask as input?\r\n\r\nHave you uploaded full the code mentioned above on your github with data? If yes, can you please share the link?",
"@sainimohit23 \r\nI didn’t provide attention mask."
] | 1,574 | 1,582 | 1,574 | NONE | null | how to output the last layer of TFBertForSequenceClassification?
I want to output the layer before classifier (Dense)
```
Model: "tf_bert_for_sequence_classification"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
bert (TFBertMainLayer) multiple 102267648
_________________________________________________________________
dropout_37 (Dropout) multiple 0
_________________________________________________________________
classifier (Dense) multiple 3845
=================================================================
Total params: 102,271,493
Trainable params: 102,271,493
Non-trainable params: 0
```
I tried tf.keras function
```
dense1_layer_model = Model(inputs=model.input, outputs=model.get_layer('bert').output)
```
It didnt worked. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1936/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1935 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1935/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1935/comments | https://api.github.com/repos/huggingface/transformers/issues/1935/events | https://github.com/huggingface/transformers/issues/1935 | 527,932,492 | MDU6SXNzdWU1Mjc5MzI0OTI= | 1,935 | attention_mask added, not multiplied ... is this correct? | {
"login": "fginter",
"id": 644401,
"node_id": "MDQ6VXNlcjY0NDQwMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/644401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fginter",
"html_url": "https://github.com/fginter",
"followers_url": "https://api.github.com/users/fginter/followers",
"following_url": "https://api.github.com/users/fginter/following{/other_user}",
"gists_url": "https://api.github.com/users/fginter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fginter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fginter/subscriptions",
"organizations_url": "https://api.github.com/users/fginter/orgs",
"repos_url": "https://api.github.com/users/fginter/repos",
"events_url": "https://api.github.com/users/fginter/events{/privacy}",
"received_events_url": "https://api.github.com/users/fginter/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ping. :) To me this still looks like the code actually fails to apply the attention mask and also the parts of the sequence intended to be masked are accessible. The line is now https://github.com/huggingface/transformers/blob/master/transformers/modeling_bert.py#L237",
"Hi, yes this is correct. Inside the `BertModel` forward method, the `attention_mask` is set to `0` for the tokens which should be attended (no modification) and `-10000` for the tokens which must be ignored, resulting in nullification of their attention scores.\r\n\r\nYou can read the relevant source code [here](https://github.com/huggingface/transformers/blob/master/transformers/modeling_bert.py#L683).",
"I see. Thank you very much for the informative answer!",
"Thank you for the explanation. I was thinking the same thing when reading the code.\r\n\r\nIn that case, shouldn't the `attention_mark` input for BertEncoder, BertLayer, ... be renamed to `extended_attention_mask` or `scaled_attention_mark`? Because those inner modules do expect the scaled (and reshaped?) mark, not the user input attention_mark.\r\n\r\nJust a suggestion.\r\n"
] | 1,574 | 1,578 | 1,576 | NONE | null | https://github.com/huggingface/transformers/blob/master/transformers/modeling_bert.py#L233
This line adds (+ operator) the attention mask. I wonder whether this is correct, as I would have very much expected the mask to be multiplied. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1935/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
} | https://api.github.com/repos/huggingface/transformers/issues/1935/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1934 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1934/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1934/comments | https://api.github.com/repos/huggingface/transformers/issues/1934/events | https://github.com/huggingface/transformers/issues/1934 | 527,920,571 | MDU6SXNzdWU1Mjc5MjA1NzE= | 1,934 | Download model too slow, is there any way | {
"login": "bigzhouj",
"id": 29719942,
"node_id": "MDQ6VXNlcjI5NzE5OTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/29719942?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bigzhouj",
"html_url": "https://github.com/bigzhouj",
"followers_url": "https://api.github.com/users/bigzhouj/followers",
"following_url": "https://api.github.com/users/bigzhouj/following{/other_user}",
"gists_url": "https://api.github.com/users/bigzhouj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bigzhouj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bigzhouj/subscriptions",
"organizations_url": "https://api.github.com/users/bigzhouj/orgs",
"repos_url": "https://api.github.com/users/bigzhouj/repos",
"events_url": "https://api.github.com/users/bigzhouj/events{/privacy}",
"received_events_url": "https://api.github.com/users/bigzhouj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If your model download is too slow and fails, you can manually download it from our S3 using your browser, wget or cURL as an alternative method.\r\n\r\nYou can then point to a directory that has both the model weights (xxx-pytorch_model.bin) and the configuration file (xxx-config.json) instead of the checkpoint name as the argument for `run_lm_finetuning.py`.",
"The models on s3 are downloaded by **botocore**. And can be accelerated using a proxy. Detailed information can be found on [](https://botocore.amazonaws.com/v1/documentation/api/latest/reference/config.html ).\r\nBecause It only supports **http** proxy now, other form of proxies like socks5 need to be converted to a http form.",
"OSError: Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json' to download pretrained model configuration file. ",
"Can you open this [ https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json](url) in your browser ?",
"It can be opened in a browser",
"in run_lm_finetuning.py,\r\n probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0).\r\nWhy does dtyped equal torch.bool?\r\nI have a difficulty here:\r\n Expected object of scalar type Byte but got scalar type Bool for argument #2 'mask'",
"Tipically, when you say _masked_ *, you want to use boolean values (0 for absence and 1 for presence). In this particular case (rows n.144-151), you are sampling some tokens in the in each sequence for **masked** language modeling. For this reason, the _probability_matrix_ variable is being set to boolean values. In fact, the first argument of the _masked_fill()_ method is a boolean Torch tensor (i.e. the boolean vector). You can read more info in the PyTorch docs [here](https://pytorch.org/docs/stable/tensors.html).\r\n\r\nFor what concern your issue, post the code for reproducibility and version of TensorFlow, PyTorch, Transformers.\r\n\r\n> in run_lm_finetuning.py,\r\n> probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0).\r\n> Why does dtyped equal torch.bool?\r\n> I have a difficulty here:\r\n> Expected object of scalar type Byte but got scalar type Bool for argument #2 'mask'",
"@bigzhouj this is probably due to a Pytorch version error. I believe `bool` was introduced in pytorch v1.2.0. What is your Pytorch version?"
] | 1,574 | 1,574 | 1,574 | NONE | null | in run_lm_finetuning.py
transformers.file_utils - https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-pytorch_model.bin not found in cache or force_download set to True, downloading to .... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1934/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1933 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1933/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1933/comments | https://api.github.com/repos/huggingface/transformers/issues/1933/events | https://github.com/huggingface/transformers/issues/1933 | 527,793,656 | MDU6SXNzdWU1Mjc3OTM2NTY= | 1,933 | Can I use HF XLNet to make a Model that Predicts Backwards? | {
"login": "Fredrum",
"id": 9632594,
"node_id": "MDQ6VXNlcjk2MzI1OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9632594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Fredrum",
"html_url": "https://github.com/Fredrum",
"followers_url": "https://api.github.com/users/Fredrum/followers",
"following_url": "https://api.github.com/users/Fredrum/following{/other_user}",
"gists_url": "https://api.github.com/users/Fredrum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Fredrum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Fredrum/subscriptions",
"organizations_url": "https://api.github.com/users/Fredrum/orgs",
"repos_url": "https://api.github.com/users/Fredrum/repos",
"events_url": "https://api.github.com/users/Fredrum/events{/privacy}",
"received_events_url": "https://api.github.com/users/Fredrum/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,580 | 1,580 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I would like to create a model that lets me generate multi sentence text sequences but backwards, given the tail end of some input text.
Could I do that using the HF XLNet framework?
I am new to this stuff so if this is possible would you be able to please give me some general pointers on how to go go about doing this?
Grateful for any advice!
Cheers
Fred
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1933/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1932 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1932/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1932/comments | https://api.github.com/repos/huggingface/transformers/issues/1932/events | https://github.com/huggingface/transformers/issues/1932 | 527,777,463 | MDU6SXNzdWU1Mjc3Nzc0NjM= | 1,932 | Using GPT-2 XL | {
"login": "IrvDelgado",
"id": 28610164,
"node_id": "MDQ6VXNlcjI4NjEwMTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/28610164?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IrvDelgado",
"html_url": "https://github.com/IrvDelgado",
"followers_url": "https://api.github.com/users/IrvDelgado/followers",
"following_url": "https://api.github.com/users/IrvDelgado/following{/other_user}",
"gists_url": "https://api.github.com/users/IrvDelgado/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IrvDelgado/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IrvDelgado/subscriptions",
"organizations_url": "https://api.github.com/users/IrvDelgado/orgs",
"repos_url": "https://api.github.com/users/IrvDelgado/repos",
"events_url": "https://api.github.com/users/IrvDelgado/events{/privacy}",
"received_events_url": "https://api.github.com/users/IrvDelgado/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You don't have to use the PyPi version of this library, but **from the source code** with `pip install git+https://github.com/huggingface/transformers.git`. This is because, at the moment, GPT2 XL version is available only in a dedicated branch called *gpt2-xl* and not in the PyPi version.\r\n\r\nMy environment is the following:\r\n- __Python__: 3.6.9\r\n- __O.S.__: Linux-4.15.0-70-generic-x86_64-with-debian-buster-sid\r\n- __Transformers__: 2.1.1 (installed from source)\r\n- __Torch__: 1.3.1\r\n\r\nAfter that, you're able to use OpenAI GPT-2 XL version as always, e.g.\r\n```\r\nimport transformers\r\nfrom transformers import GPT2Tokenizer\r\nfrom transformers import GPT2Model\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')\r\nmodel = GPT2Model.from_pretrained('gpt2-xl')\r\n...\r\n```\r\n\r\nPlease, close this issue!\r\n\r\n> Hi, Im trying to use the pretrained gpt-xl\r\n> but I get the following error:\r\n> OSError: Model name 'gpt2-xl' was not found in tokenizers model name list (gpt2, gpt2-medium, gpt2-large, distilgpt2). We assumed 'gpt2-xl' was a path or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\r\n> \r\n> Im following an example from the documentation.\r\n> \r\n> ```\r\n> 7 # Load pre-trained model tokenizer (vocabulary)\r\n> ```\r\n> \r\n> ---> 8 tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')\r\n> 9 # Load pre-trained model (weights)\r\n> 10 model = GPT2LMHeadModel.from_pretrained('gpt2-xl')\r\n> Any Idea why?\r\n> \r\n> Thank you. :)"
] | 1,574 | 1,574 | 1,574 | NONE | null | Hi, Im trying to use the pretrained gpt-xl
but I get the following error:
OSError: Model name 'gpt2-xl' was not found in tokenizers model name list (gpt2, gpt2-medium, gpt2-large, distilgpt2). We assumed 'gpt2-xl' was a path or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.
Im following an example from the documentation.
7 # Load pre-trained model tokenizer (vocabulary)
---> 8 tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
9 # Load pre-trained model (weights)
10 model = GPT2LMHeadModel.from_pretrained('gpt2-xl')
Any Idea why?
Thank you. :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1932/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1931 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1931/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1931/comments | https://api.github.com/repos/huggingface/transformers/issues/1931/events | https://github.com/huggingface/transformers/issues/1931 | 527,655,162 | MDU6SXNzdWU1Mjc2NTUxNjI= | 1,931 | Using model output by transformers (v2.0) in older versions (0.4.0 or 1.0.0) | {
"login": "Genius1237",
"id": 15867363,
"node_id": "MDQ6VXNlcjE1ODY3MzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/15867363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Genius1237",
"html_url": "https://github.com/Genius1237",
"followers_url": "https://api.github.com/users/Genius1237/followers",
"following_url": "https://api.github.com/users/Genius1237/following{/other_user}",
"gists_url": "https://api.github.com/users/Genius1237/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Genius1237/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Genius1237/subscriptions",
"organizations_url": "https://api.github.com/users/Genius1237/orgs",
"repos_url": "https://api.github.com/users/Genius1237/repos",
"events_url": "https://api.github.com/users/Genius1237/events{/privacy}",
"received_events_url": "https://api.github.com/users/Genius1237/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Models' architectures should have stayed relatively the same between versions. If you did not get warnings telling you that some layers had not been loaded, then you should be good to go!\r\n\r\nYou could try and compare inferences between two environments which have different versions of the library installed to make sure that they're the same.",
"Well, I don't get any warnings, so that is good.\r\n\r\nCould someone comment on `bert_config.json` vs `config.json`. Has there been any change in the naming scheme with the newer versions?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,580 | 1,580 | CONTRIBUTOR | null | ## ❓ Questions & Help
I have a bert model finetuned on in-domain data on the latest version of the package (`2.0`). I would now like to use this in some code that is written with the older version of the package (say `0.4.0` or `1.0.0`). Would this be possible?
I tried pointing the code which imports version `0.4.0` to the model output and it gave an error saying `bert_config.json` not found, but there was a `config.json` in the model folder. I renamed `config.json` file and made the code run again and it seems to run.
Am I on the right track? Is this all I have to do to get it run?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1931/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1930 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1930/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1930/comments | https://api.github.com/repos/huggingface/transformers/issues/1930/events | https://github.com/huggingface/transformers/issues/1930 | 527,637,303 | MDU6SXNzdWU1Mjc2MzczMDM= | 1,930 | BERT bertviz | {
"login": "RuiPChaves",
"id": 33401801,
"node_id": "MDQ6VXNlcjMzNDAxODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/33401801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RuiPChaves",
"html_url": "https://github.com/RuiPChaves",
"followers_url": "https://api.github.com/users/RuiPChaves/followers",
"following_url": "https://api.github.com/users/RuiPChaves/following{/other_user}",
"gists_url": "https://api.github.com/users/RuiPChaves/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RuiPChaves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RuiPChaves/subscriptions",
"organizations_url": "https://api.github.com/users/RuiPChaves/orgs",
"repos_url": "https://api.github.com/users/RuiPChaves/repos",
"events_url": "https://api.github.com/users/RuiPChaves/events{/privacy}",
"received_events_url": "https://api.github.com/users/RuiPChaves/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,574 | 1,574 | 1,574 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1930/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/1929 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1929/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1929/comments | https://api.github.com/repos/huggingface/transformers/issues/1929/events | https://github.com/huggingface/transformers/issues/1929 | 527,624,336 | MDU6SXNzdWU1Mjc2MjQzMzY= | 1,929 | configuration of the optimizer | {
"login": "antgr",
"id": 2175768,
"node_id": "MDQ6VXNlcjIxNzU3Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2175768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antgr",
"html_url": "https://github.com/antgr",
"followers_url": "https://api.github.com/users/antgr/followers",
"following_url": "https://api.github.com/users/antgr/following{/other_user}",
"gists_url": "https://api.github.com/users/antgr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antgr/subscriptions",
"organizations_url": "https://api.github.com/users/antgr/orgs",
"repos_url": "https://api.github.com/users/antgr/repos",
"events_url": "https://api.github.com/users/antgr/events{/privacy}",
"received_events_url": "https://api.github.com/users/antgr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,580 | 1,580 | NONE | null | ## 📚 Migration
<!-- Important information -->
Model I am using (Bert, XLNet....):
Bert
Language I am using the model on (English, Chinese....):
English
The problem arise when using:
* [ ] the official example scripts: (give details)
* [x] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details)
Sequence labeling task
Details of the issue:
So, what is the issue:
I have a code that works with ```pytorch-pretrained-bert==0.4.0```
with the following setup for the optimizer:
```
FULL_FINETUNING = True
if FULL_FINETUNING:
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'gamma', 'beta']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.0}
]
else:
param_optimizer = list(model.classifier.named_parameters())
optimizer_grouped_parameters = [{"params": [p for n, p in param_optimizer]}]
optimizer = Adam(optimizer_grouped_parameters, lr=3e-5)
```
With this configuration I have an f1-score near to 68% from the beginning.
But with transformers, and migrating to something like (taken from documentation):
```
um_training_steps = 1000
num_warmup_steps = 100
warmup_proportion = float(num_warmup_steps) / float(num_training_steps) # 0.1
#optimizer = AdamW(model.parameters(), lr=lr, correct_bias=False) # To reproduce BertAdam specific behavior set correct_bias=False
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=num_training_steps)
```
I stay in an f1-score near to 14%.
How can I simulate the former functionality to get back to to the better f1-score?
The changes between the two versions are
```
> num_training_steps = 1000
> num_warmup_steps = 100
> warmup_proportion = float(num_warmup_steps) / float(num_training_steps) # 0.1
> #optimizer = AdamW(model.parameters(), lr=lr, correct_bias=False) # To reproduce BertAdam specific behavior set correct_bias=False
> scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=num_training_steps)
>
181c199
< epochs = 50
---
> epochs = 10
195c213
< attention_mask=b_input_mask, labels=b_labels)
---
> attention_mask=b_input_mask, labels=b_labels)[0]
204a223,224
> #optimizer.step()
> torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm) # Gradient clipping is not in AdamW anymore (so you can use amp without issue)
205a226,227
> scheduler.step()
>
220c242
< attention_mask=b_input_mask, labels=b_labels)
---
> attention_mask=b_input_mask, labels=b_labels)[0]
222c244
< attention_mask=b_input_mask)
---
> attention_mask=b_input_mask)[0]
276d297
```
Also in the documentation https://huggingface.co/transformers/migration.html
suggests the following order
```
scheduler.step()
optimizer.step()
```
but that raises a warning from latest version of pytorch which wants the opposite order.
<!-- A clear and concise description of the migration issue. If you have code snippets, please provide it here as well. -->
## Environment
* OS:
* Python version: 3.6
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): latest
* Using GPU ? yes
* Distributed of parallel setup ? No
* Any other relevant information:
## Checklist
- [x] I have read the migration guide in the readme.
- [ ] I checked if a related official extension example runs on my machine.
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1929/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1928 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1928/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1928/comments | https://api.github.com/repos/huggingface/transformers/issues/1928/events | https://github.com/huggingface/transformers/pull/1928 | 527,577,667 | MDExOlB1bGxSZXF1ZXN0MzQ0ODIyNjY1 | 1,928 | Split on punc should receive never_split list | {
"login": "eisenjulian",
"id": 7776575,
"node_id": "MDQ6VXNlcjc3NzY1NzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7776575?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eisenjulian",
"html_url": "https://github.com/eisenjulian",
"followers_url": "https://api.github.com/users/eisenjulian/followers",
"following_url": "https://api.github.com/users/eisenjulian/following{/other_user}",
"gists_url": "https://api.github.com/users/eisenjulian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eisenjulian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eisenjulian/subscriptions",
"organizations_url": "https://api.github.com/users/eisenjulian/orgs",
"repos_url": "https://api.github.com/users/eisenjulian/repos",
"events_url": "https://api.github.com/users/eisenjulian/events{/privacy}",
"received_events_url": "https://api.github.com/users/eisenjulian/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1928?src=pr&el=h1) Report\n> Merging [#1928](https://codecov.io/gh/huggingface/transformers/pull/1928?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/176cd1ce1b337134425b426207fbe155099c18b4?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1928?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1928 +/- ##\n=======================================\n Coverage 84.04% 84.04% \n=======================================\n Files 97 97 \n Lines 14333 14333 \n=======================================\n Hits 12046 12046 \n Misses 2287 2287\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1928?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1928/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9iZXJ0LnB5) | `95.92% <100%> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1928?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1928?src=pr&el=footer). Last update [176cd1c...35b06fa](https://codecov.io/gh/huggingface/transformers/pull/1928?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi, thanks for opening a PR! Could you provide an example of where this is an issue?",
"Hi @LysandreJik, thanks for the quick answer. The issue arises if you have `never_split` list that contains strings with punctuation, for example, square brackets. That means that you cannot easily append or use tokens in the vocabulary that have square brackets around them.\r\n\r\nTwo ways I can think of doing that is trying to reuse the [unusedN] tokens that come with BertTokenizer or adding new ones to the vocabulary. Something like the following:\r\n\r\n BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True, never_split=['[unused1]']).tokenize('hi how are you [unused1]')\r\n > ['hi', 'how', 'are', 'you', '[', 'unused', '##1', ']']\r\n\r\nIt's also telling that the method receives a never_split list that is never used, so it seems like it was originally meant to be used in that way.",
"Hi the `never_split` option is deprecated now (and kept for backward compatibility purposes only).\r\n\r\nTo avoid splitting a token, you should add it to the vocabulary using `tokenizer.add_tokens(['[unused1]'])`."
] | 1,574 | 1,575 | 1,575 | NONE | null | When tokenizing when a never_split token that contains any punctuation, such as [ or ] they currently get split when they shouldn't be. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1928/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1928",
"html_url": "https://github.com/huggingface/transformers/pull/1928",
"diff_url": "https://github.com/huggingface/transformers/pull/1928.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1928.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1927 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1927/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1927/comments | https://api.github.com/repos/huggingface/transformers/issues/1927/events | https://github.com/huggingface/transformers/issues/1927 | 527,558,347 | MDU6SXNzdWU1Mjc1NTgzNDc= | 1,927 | Mask probability in run_lm_finetuning.py | {
"login": "leuchine",
"id": 3937040,
"node_id": "MDQ6VXNlcjM5MzcwNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3937040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leuchine",
"html_url": "https://github.com/leuchine",
"followers_url": "https://api.github.com/users/leuchine/followers",
"following_url": "https://api.github.com/users/leuchine/following{/other_user}",
"gists_url": "https://api.github.com/users/leuchine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leuchine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leuchine/subscriptions",
"organizations_url": "https://api.github.com/users/leuchine/orgs",
"repos_url": "https://api.github.com/users/leuchine/repos",
"events_url": "https://api.github.com/users/leuchine/events{/privacy}",
"received_events_url": "https://api.github.com/users/leuchine/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The lines:\r\n\r\n```\r\nindices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices\r\n```\r\nand\r\n```\r\nindices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced\r\n```\r\nmake it that `indices_random` has a 50% chance of being true when `indices_replaced`, which has a 80% chance of being active, is not active, which is: 100% - 80% = 20%. 50% of 20% is 10%, so the chance that indices_random is true \\*is\\* 10%",
"Thanks for the reply. Yes. You are right! I missed the ~indices_replaced when reading the code. Thanks!"
] | 1,574 | 1,574 | 1,574 | NONE | null | Hi:
I don't understand why 0.5 is used when replacing masked input tokens with a random word. I think the probability should be 0.1? Or, the positions replaced with [MASK] shall already be stripped out before using 0.5. I think it is a small bug here? Thanks!
# 80% of the time, we replace masked input tokens with tokenizer.mask_token ([MASK])
indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
inputs[indices_replaced] = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)
# 10% of the time, we replace masked input tokens with random word
indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
random_words = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)
inputs[indices_random] = random_words[indices_random]
# The rest of the time (10% of the time) we keep the masked input tokens unchanged
return inputs, labels
Best Regards,
Qi
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1927/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1926 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1926/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1926/comments | https://api.github.com/repos/huggingface/transformers/issues/1926/events | https://github.com/huggingface/transformers/issues/1926 | 527,552,342 | MDU6SXNzdWU1Mjc1NTIzNDI= | 1,926 | How to process ARC dataset with HuggingFace GPT2 | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,580 | 1,580 | NONE | null | Hello,
I am interested in processing ARC dataset with HuggingFace GPT2.
The ARC dataset (http://nlpprogress.com/english/question_answering.html) is a question answering, which contains 7,787 genuine grade-school level, multiple-choice science questions. The dataset also comes with the full corpus of texts extracted from various articles that explains various scientific concepts that can be used to solve these 7,787 multiple choice questions (i.e. this full-corpus is not in the multiple choice format; it's just a series of excerpts from various articles)
I am assuming I'd have to use the GPT2DoubleHeadsModel to process this ARC dataset, since it is a set of multiple-choice questions. However, I also need to somehow train my GPT2DoubleHeadsModel based on the contents of the full corpus of texts that contains excerpts from various scientific articles, since GPT2DoubleHeadsModel wouldn't have acquired any scientific knowledge prior to processing this dataset.
But the thing is, the corpus of series of scientific articles on which I am interested in training my GPT2DoubleHeadsModel with is not written in a multiple-choice format -- is it possible to train only those parts of the GPT2DoubleHeadsModel that are responsible for language modelling with the series of scientific articles, and then fine-tune the entire component of the GPT2DoubleHeadsModel with the training data from the multiple choice questions?
If it is possible, how can I do it?
Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1926/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1925 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1925/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1925/comments | https://api.github.com/repos/huggingface/transformers/issues/1925/events | https://github.com/huggingface/transformers/issues/1925 | 527,529,567 | MDU6SXNzdWU1Mjc1Mjk1Njc= | 1,925 | Need a Restore training mechenisim in run_lm_finetuning.py | {
"login": "chuanmingliu",
"id": 1910024,
"node_id": "MDQ6VXNlcjE5MTAwMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1910024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chuanmingliu",
"html_url": "https://github.com/chuanmingliu",
"followers_url": "https://api.github.com/users/chuanmingliu/followers",
"following_url": "https://api.github.com/users/chuanmingliu/following{/other_user}",
"gists_url": "https://api.github.com/users/chuanmingliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chuanmingliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chuanmingliu/subscriptions",
"organizations_url": "https://api.github.com/users/chuanmingliu/orgs",
"repos_url": "https://api.github.com/users/chuanmingliu/repos",
"events_url": "https://api.github.com/users/chuanmingliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/chuanmingliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If you want to resume training with the same learning rate, you can save the scheduler and optimizer and reload them when resuming training.\r\n\r\nFor example, you could save the current training state with:\r\n\r\n```python\r\n\r\n# Save the model and tokenizer\r\nmodel.save_pretrained('./checkpoints/')\r\ntokenizer.save_pretrained('./checkpoints/')\r\n\r\n# Save the optimizer and scheduler\r\ntorch.save(optimizer.state_dict(), './checkpoints/optimizer.pt')\r\ntorch.save(scheduler.state_dict(), './checkpoints/scheduler.pt')\r\n```\r\n\r\nAnd resume training with:\r\n\r\n```python\r\n# Initialize model and tokenizer from checkpoints dir\r\nmodel = BertModel.from_pretrained('./checkpoints/')\r\ntokenizer = BertTokenizer.from_pretrained('./checkpoints/')\r\n\r\n# Load optimizer and scheduler state\r\noptimizer.load_state_dict(torch.load('./checkpoints/optimizer.pt'))\r\nscheduler.load_state_dict(torch.load('./checkpoints/scheduler.pt'))\r\n```\r\n\r\nIf you want more information, take a look at #839 and Pytorch's model serialization [tutorial](https://pytorch.org/tutorials/beginner/saving_loading_models.html)\r\n\r\nIf you want to resume training at the exact epoch and batch where you left off, like this [person](https://github.com/huggingface/transformers/issues/839#issuecomment-515129371), you could save the epoch and batch number as well and `continue` all iterations until you reach the correct batch",
"@bkkaggle Thanks for your reply, it really helps a lot!\r\n\r\nThank you!",
"@bkkaggle \r\nHowever, the reasons that I change to PyTorch (Transformers by huggingface) are easy to use and thousands more positive ones.\r\n\r\n> Why not adding an universal functionality to smoothly support this feature, like TF checkpoint does?\r\n\r\nI think that is a natural way to save checkpoint when training.\r\n\r\nIt sounds more troublesome to customize the checkpoint style by users themselves, considering the high-level encapsulation characteristic brought by the framework."
] | 1,574 | 1,574 | 1,574 | NONE | null | ## 🚀 Feature
<!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. -->
## Motivation
When training run_lm_finetuning.py for a long time, a restore training feature should be added.
Otherwise, states of sheduler and optimizer are changed when restart.
For example, when it breaks at step checkpoint-30000, it will restart at step 0 with initial learning rate and other configs. This is really troublesome.
Thanks, please.
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. -->
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1925/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1924 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1924/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1924/comments | https://api.github.com/repos/huggingface/transformers/issues/1924/events | https://github.com/huggingface/transformers/issues/1924 | 527,526,534 | MDU6SXNzdWU1Mjc1MjY1MzQ= | 1,924 | TypeError: convert_examples_to_features() got an unexpected keyword argument 'sequence_a_is_doc' | {
"login": "snijesh",
"id": 25811390,
"node_id": "MDQ6VXNlcjI1ODExMzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/25811390?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/snijesh",
"html_url": "https://github.com/snijesh",
"followers_url": "https://api.github.com/users/snijesh/followers",
"following_url": "https://api.github.com/users/snijesh/following{/other_user}",
"gists_url": "https://api.github.com/users/snijesh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/snijesh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/snijesh/subscriptions",
"organizations_url": "https://api.github.com/users/snijesh/orgs",
"repos_url": "https://api.github.com/users/snijesh/repos",
"events_url": "https://api.github.com/users/snijesh/events{/privacy}",
"received_events_url": "https://api.github.com/users/snijesh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Please, try to install Transformers library **from source code**, and **not from PyPi**. The former one is the up-to-date version. In fact, if you see in [utils_squad.py](https://github.com/huggingface/transformers/blob/master/examples/utils_squad.py) at row 197, there is a parameter called `sequence_a_is_doc` in the definition of the `convert_examples_to_features()` method. Try it out and keep us updated on this problem!\r\n\r\n> ## Bug\r\n> Model I am using (Bert):\r\n> \r\n> Language I am using the model on (English):\r\n> \r\n> The problem arise when using:\r\n> \r\n> ```\r\n> !python run_squad.py \\\r\n> --model_type bert \\\r\n> --model_name_or_path bert-large-uncased \\\r\n> --do_train \\\r\n> --do_eval \\\r\n> --do_lower_case \\\r\n> --train_file train-v1.1.json \\\r\n> --predict_file dev-v1.1.json \\\r\n> --per_gpu_train_batch_size 12 \\\r\n> --learning_rate 3e-5 \\\r\n> --num_train_epochs 2.0 \\\r\n> --max_seq_length 384 \\\r\n> --doc_stride 128 \\\r\n> --output_dir /tmp/debug_squad/\r\n> ```\r\n> \r\n> The tasks I am working on is:\r\n> \r\n> * [ ] an official GLUE/SQUaD task: Fine-tuning on SQuAD\r\n> \r\n> ## Environment\r\n> * OS:\r\n> * Python version: 3.6\r\n> * PyTorch version: 1.3.1'\r\n> * PyTorch Transformers version (or branch): 2.1.1\r\n> * Using GPU ? Yes",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,580 | 1,580 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert):
Language I am using the model on (English):
The problem arise when using:
```
!python run_squad.py \
--model_type bert \
--model_name_or_path bert-large-uncased \
--do_train \
--do_eval \
--do_lower_case \
--train_file train-v1.1.json \
--predict_file dev-v1.1.json \
--per_gpu_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: Fine-tuning on SQuAD
## Environment
* OS:
* Python version: 3.6
* PyTorch version: 1.3.1'
* PyTorch Transformers version (or branch): 2.1.1
* Using GPU ? Yes
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1924/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1924/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1923 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1923/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1923/comments | https://api.github.com/repos/huggingface/transformers/issues/1923/events | https://github.com/huggingface/transformers/issues/1923 | 527,523,139 | MDU6SXNzdWU1Mjc1MjMxMzk= | 1,923 | Step restarts from step 0 when reload from an existing checkpoint? | {
"login": "chuanmingliu",
"id": 1910024,
"node_id": "MDQ6VXNlcjE5MTAwMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1910024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chuanmingliu",
"html_url": "https://github.com/chuanmingliu",
"followers_url": "https://api.github.com/users/chuanmingliu/followers",
"following_url": "https://api.github.com/users/chuanmingliu/following{/other_user}",
"gists_url": "https://api.github.com/users/chuanmingliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chuanmingliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chuanmingliu/subscriptions",
"organizations_url": "https://api.github.com/users/chuanmingliu/orgs",
"repos_url": "https://api.github.com/users/chuanmingliu/repos",
"events_url": "https://api.github.com/users/chuanmingliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/chuanmingliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,574 | 1,574 | 1,574 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hi, everyone
I am totally new to Transformers, it's really a good solution. :-)
Question:
When I reload my program from a break point (existing checkpoint, say checkpoint-30000),
what I expect is that, from Tensorboard, I can see the program restarting from 30001.
However, it restarts from step 0, although parameters are up to date.
Does any config point to this problem?
Or any easy solutions?
Thanks.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1923/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1922 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1922/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1922/comments | https://api.github.com/repos/huggingface/transformers/issues/1922/events | https://github.com/huggingface/transformers/pull/1922 | 527,486,830 | MDExOlB1bGxSZXF1ZXN0MzQ0NzU4NTg5 | 1,922 | update | {
"login": "maxmatical",
"id": 8890262,
"node_id": "MDQ6VXNlcjg4OTAyNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8890262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxmatical",
"html_url": "https://github.com/maxmatical",
"followers_url": "https://api.github.com/users/maxmatical/followers",
"following_url": "https://api.github.com/users/maxmatical/following{/other_user}",
"gists_url": "https://api.github.com/users/maxmatical/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxmatical/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxmatical/subscriptions",
"organizations_url": "https://api.github.com/users/maxmatical/orgs",
"repos_url": "https://api.github.com/users/maxmatical/repos",
"events_url": "https://api.github.com/users/maxmatical/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxmatical/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1922?src=pr&el=h1) Report\n> Merging [#1922](https://codecov.io/gh/huggingface/transformers/pull/1922?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/26db31e0c09a8b5e1ca7a61c454b159eab9d86be?src=pr&el=desc) will **decrease** coverage by `<.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1922?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1922 +/- ##\n==========================================\n- Coverage 84.04% 84.03% -0.01% \n==========================================\n Files 97 94 -3 \n Lines 14333 14032 -301 \n==========================================\n- Hits 12046 11792 -254 \n+ Misses 2287 2240 -47\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1922?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `91.2% <0%> (-0.95%)` | :arrow_down: |\n| [transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYXV0by5weQ==) | `58.82% <0%> (-0.64%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3JvYmVydGEucHk=) | `89.9% <0%> (-0.53%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbmV0LnB5) | `87.82% <0%> (-0.37%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3RyYW5zZm9feGwucHk=) | `92.21% <0%> (-0.33%)` | :arrow_down: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `96.8% <0%> (-0.28%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `92.4% <0%> (-0.28%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2N0cmwucHk=) | `97.75% <0%> (-0.11%)` | :arrow_down: |\n| [transformers/tests/modeling\\_distilbert\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2Rpc3RpbGJlcnRfdGVzdC5weQ==) | `99.09% <0%> (-0.09%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbS5weQ==) | `90.39% <0%> (-0.08%)` | :arrow_down: |\n| ... and [16 more](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1922?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1922?src=pr&el=footer). Last update [26db31e...4da7586](https://codecov.io/gh/huggingface/transformers/pull/1922?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,574 | 1,575 | 1,575 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1922/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1922",
"html_url": "https://github.com/huggingface/transformers/pull/1922",
"diff_url": "https://github.com/huggingface/transformers/pull/1922.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1922.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/1921 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1921/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1921/comments | https://api.github.com/repos/huggingface/transformers/issues/1921/events | https://github.com/huggingface/transformers/issues/1921 | 527,486,174 | MDU6SXNzdWU1Mjc0ODYxNzQ= | 1,921 | FileNotFoundError when running run_squad.py | {
"login": "maxmatical",
"id": 8890262,
"node_id": "MDQ6VXNlcjg4OTAyNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8890262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxmatical",
"html_url": "https://github.com/maxmatical",
"followers_url": "https://api.github.com/users/maxmatical/followers",
"following_url": "https://api.github.com/users/maxmatical/following{/other_user}",
"gists_url": "https://api.github.com/users/maxmatical/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxmatical/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxmatical/subscriptions",
"organizations_url": "https://api.github.com/users/maxmatical/orgs",
"repos_url": "https://api.github.com/users/maxmatical/repos",
"events_url": "https://api.github.com/users/maxmatical/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxmatical/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You need to download that json on squad website and put to your local\ndirectory zzz\n\nOn Sat, Nov 23, 2019 at 09:09 Max Tian <[email protected]> wrote:\n\n> ❓ Questions & Help\n>\n> I tried fine-tuning BERT on squad on my local computer. The script I ran\n> was\n>\n> python3 ./examples/run_squad.py \\\n>\n> --model_type bert \\\n>\n> --model_name_or_path bert-large-uncased-whole-word-masking \\\n>\n> --do_train \\\n>\n> --do_eval \\\n>\n> --do_lower_case \\\n>\n> --train_file $SQUAD_DIR/train-v1.1.json \\\n>\n> --predict_file $SQUAD_DIR/dev-v1.1.json \\\n>\n> --learning_rate 3e-5 \\\n>\n> --num_train_epochs 2 \\\n>\n> --max_seq_length 384 \\\n>\n> --doc_stride 128 \\\n>\n> --output_dir ../models/wwm_uncased_finetuned_squad/ \\\n>\n> --per_gpu_eval_batch_size=3 \\\n>\n> --per_gpu_train_batch_size=3 \\\n>\n>\n> But I get an error with regards to the train-v1.1.json not being found.\n> The full output is\n>\n> I1122 20:03:40.218862 4637015488 tokenization_utils.py:375] loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-vocab.txt from cache at /Users/maxtian/.cache/torch/transformers/b3a6b2c6d7ea2ffa06d0e7577c1e88b94fad470ae0f060a4ffef3fe0bdf86730.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084\n>\n> I1122 20:03:40.596048 4637015488 modeling_utils.py:383] loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-pytorch_model.bin from cache at /Users/maxtian/.cache/torch/transformers/66cc7a7501e3499efedc37e47b3a613e0d3d8d0a51c66224c69f0c669b52dcfb.ae11cc7f2a26b857b76b404a908c7abad793f88bf8ad95caecff154da87994b1\n>\n> I1122 20:03:54.460903 4637015488 modeling_utils.py:453] Weights of BertForQuestionAnswering not initialized from pretrained model: ['qa_outputs.weight', 'qa_outputs.bias']\n>\n> I1122 20:03:54.461247 4637015488 modeling_utils.py:456] Weights from pretrained model not used in BertForQuestionAnswering: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias']\n>\n> I1122 20:03:54.473404 4637015488 run_squad.py:504] Training/evaluation parameters Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', device=device(type='cpu'), do_eval=True, do_lower_case=True, do_train=True, doc_stride=128, eval_all_checkpoints=False, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=3e-05, local_rank=-1, logging_steps=50, max_answer_length=30, max_grad_norm=1.0, max_query_length=64, max_seq_length=384, max_steps=-1, model_name_or_path='bert-large-uncased-whole-word-masking', model_type='bert', n_best_size=20, n_gpu=0, no_cuda=False, null_score_diff_threshold=0.0, num_train_epochs=2.0, output_dir='../models/wwm_uncased_finetuned_squad/', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=3, per_gpu_train_batch_size=3, predict_file='/dev-v1.1.json', save_steps=50, seed=42, server_ip='', server_port='', tokenizer_name='', train_file='/train-v1.1.json', verbose_logging=False, version_2_with_negative=False, warmup_steps=0, weight_decay=0.0)\n>\n> I1122 20:03:54.474577 4637015488 run_squad.py:308] Creating features from dataset file at /train-v1.1.json\n>\n>\n>\n>\n> And I get the following error\n>\n> Traceback (most recent call last):\n>\n> File \"./examples/run_squad.py\", line 573, in <module>\n>\n> main()\n>\n> File \"./examples/run_squad.py\", line 518, in main\n>\n> train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False, output_examples=False)\n>\n> File \"./examples/run_squad.py\", line 311, in load_and_cache_examples\n>\n> version_2_with_negative=args.version_2_with_negative)\n>\n> File \"/Users/maxtian/Desktop/Python_Projects/transformers/examples/utils_squad.py\", line 114, in read_squad_examples\n>\n> with open(input_file, \"r\", encoding='utf-8') as reader:\n>\n> FileNotFoundError: [Errno 2] No such file or directory: '/train-v1.1.json'\n>\n>\n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/1921?email_source=notifications&email_token=AIEAE4HBKLYKDQWTFUTKO3TQVB7EZA5CNFSM4JQXY37KYY3PNVWWK3TUL52HS4DFUVEXG43VMWVGG33NNVSW45C7NFSM4H3QZTPA>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AIEAE4ANIT2HJAZA5JSGH2LQVB7EZANCNFSM4JQXY37A>\n> .\n>\n",
"oh my mistake. i thought the json files are already in the repo"
] | 1,574 | 1,574 | 1,574 | NONE | null | ## ❓ Questions & Help
I tried fine-tuning BERT on squad on my local computer. The script I ran was
```
python3 ./examples/run_squad.py \
--model_type bert \
--model_name_or_path bert-large-uncased-whole-word-masking \
--do_train \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v1.1.json \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ../models/wwm_uncased_finetuned_squad/ \
--per_gpu_eval_batch_size=3 \
--per_gpu_train_batch_size=3 \
```
But I get an error with regards to the `train-v1.1.json` not being found. The full output is
```
I1122 20:03:40.218862 4637015488 tokenization_utils.py:375] loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-vocab.txt from cache at /Users/maxtian/.cache/torch/transformers/b3a6b2c6d7ea2ffa06d0e7577c1e88b94fad470ae0f060a4ffef3fe0bdf86730.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
I1122 20:03:40.596048 4637015488 modeling_utils.py:383] loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-pytorch_model.bin from cache at /Users/maxtian/.cache/torch/transformers/66cc7a7501e3499efedc37e47b3a613e0d3d8d0a51c66224c69f0c669b52dcfb.ae11cc7f2a26b857b76b404a908c7abad793f88bf8ad95caecff154da87994b1
I1122 20:03:54.460903 4637015488 modeling_utils.py:453] Weights of BertForQuestionAnswering not initialized from pretrained model: ['qa_outputs.weight', 'qa_outputs.bias']
I1122 20:03:54.461247 4637015488 modeling_utils.py:456] Weights from pretrained model not used in BertForQuestionAnswering: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias']
I1122 20:03:54.473404 4637015488 run_squad.py:504] Training/evaluation parameters Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', device=device(type='cpu'), do_eval=True, do_lower_case=True, do_train=True, doc_stride=128, eval_all_checkpoints=False, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=3e-05, local_rank=-1, logging_steps=50, max_answer_length=30, max_grad_norm=1.0, max_query_length=64, max_seq_length=384, max_steps=-1, model_name_or_path='bert-large-uncased-whole-word-masking', model_type='bert', n_best_size=20, n_gpu=0, no_cuda=False, null_score_diff_threshold=0.0, num_train_epochs=2.0, output_dir='../models/wwm_uncased_finetuned_squad/', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=3, per_gpu_train_batch_size=3, predict_file='/dev-v1.1.json', save_steps=50, seed=42, server_ip='', server_port='', tokenizer_name='', train_file='/train-v1.1.json', verbose_logging=False, version_2_with_negative=False, warmup_steps=0, weight_decay=0.0)
I1122 20:03:54.474577 4637015488 run_squad.py:308] Creating features from dataset file at /train-v1.1.json
```
And I get the following error
```
Traceback (most recent call last):
File "./examples/run_squad.py", line 573, in <module>
main()
File "./examples/run_squad.py", line 518, in main
train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False, output_examples=False)
File "./examples/run_squad.py", line 311, in load_and_cache_examples
version_2_with_negative=args.version_2_with_negative)
File "/Users/maxtian/Desktop/Python_Projects/transformers/examples/utils_squad.py", line 114, in read_squad_examples
with open(input_file, "r", encoding='utf-8') as reader:
FileNotFoundError: [Errno 2] No such file or directory: '/train-v1.1.json'
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1921/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1920 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1920/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1920/comments | https://api.github.com/repos/huggingface/transformers/issues/1920/events | https://github.com/huggingface/transformers/issues/1920 | 527,404,160 | MDU6SXNzdWU1Mjc0MDQxNjA= | 1,920 | CTRLTokenizer not consistent with the fastBPE tokenizer used in Salesforce/CTRL | {
"login": "orenmelamud",
"id": 55256832,
"node_id": "MDQ6VXNlcjU1MjU2ODMy",
"avatar_url": "https://avatars.githubusercontent.com/u/55256832?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orenmelamud",
"html_url": "https://github.com/orenmelamud",
"followers_url": "https://api.github.com/users/orenmelamud/followers",
"following_url": "https://api.github.com/users/orenmelamud/following{/other_user}",
"gists_url": "https://api.github.com/users/orenmelamud/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orenmelamud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orenmelamud/subscriptions",
"organizations_url": "https://api.github.com/users/orenmelamud/orgs",
"repos_url": "https://api.github.com/users/orenmelamud/repos",
"events_url": "https://api.github.com/users/orenmelamud/events{/privacy}",
"received_events_url": "https://api.github.com/users/orenmelamud/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, thanks a lot for the bug report.\r\nThis is fixed on master (by using regex to split instead of white spaces)."
] | 1,574 | 1,575 | 1,575 | NONE | null | ## 🐛 Bug
cc @keskarnitish
<!-- Important information -->
I am using the transformers CTRL re-implementation to fine-tune the original pre-trained model released by Salesforce https://github.com/salesforce/ctrl.
When the input text consists of newline characters, the tokenization of the Transformers tokenizer differs from the one used by Salesforce CTRL.
## To Reproduce
Salesforce tokenization:
```
import fastBPE
import re
bpe = fastBPE.fastBPE('codes', 'vocab')
line = 'This is one sentence.\nAnd this is another sentence!\n'
tokenized_line = bpe.apply([line])[0]
tokenized_line = re.findall(r'\S+|\n', tokenized_line)
toks = list(filter(lambda x: x != u'@@', tokenized_line))
print(toks)
['This', 'is', 'one', 'sentenc@@', 'e.@@', '\n', 'And', 'this', 'is', 'another', 'sent@@', 'ence@@', '!@@', '\n']
```
Transformers tokenization:
```
from transformers import CTRLTokenizer
tokenizer = CTRLTokenizer.from_pretrained('ctrl', do_lower_case=False)
toks = tokenizer.tokenize(line)
print(toks)
['This', 'is', 'one', 'sentenc@@', 'e.@@', '\n@@', 'And', 'this', 'is', 'another', 'sent@@', 'ence!']
```
Also, I get this issue with double space in the input text:
```
line = 'And also a problem with more than one consecutive space'
tokenized_line = tokenizer.tokenize(line)
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-37-747954838180> in <module>
1 line = 'And also a problem with more than one consecutive space'
----> 2 tokenized_line = tokenizer.tokenize(line)
~/anaconda3/envs/hugging/lib/python3.7/site-packages/transformers/tokenization_utils.py in tokenize(self, text, **kwargs)
647
648 added_tokens = list(self.added_tokens_encoder.keys()) + self.all_special_tokens
--> 649 tokenized_text = split_on_tokens(added_tokens, text)
650 return tokenized_text
651
~/anaconda3/envs/hugging/lib/python3.7/site-packages/transformers/tokenization_utils.py in split_on_tokens(tok_list, text)
644 return sum((self._tokenize(token, **kwargs) if token not \
645 in self.added_tokens_encoder and token not in self.all_special_tokens \
--> 646 else [token] for token in tokenized_text), [])
647
648 added_tokens = list(self.added_tokens_encoder.keys()) + self.all_special_tokens
~/anaconda3/envs/hugging/lib/python3.7/site-packages/transformers/tokenization_utils.py in <genexpr>(.0)
644 return sum((self._tokenize(token, **kwargs) if token not \
645 in self.added_tokens_encoder and token not in self.all_special_tokens \
--> 646 else [token] for token in tokenized_text), [])
647
648 added_tokens = list(self.added_tokens_encoder.keys()) + self.all_special_tokens
~/anaconda3/envs/hugging/lib/python3.7/site-packages/transformers/tokenization_ctrl.py in _tokenize(self, text)
141
142 for token in text:
--> 143 split_tokens.extend([t for t in self.bpe(token).split(' ')])
144 return split_tokens
145
~/anaconda3/envs/hugging/lib/python3.7/site-packages/transformers/tokenization_ctrl.py in bpe(self, token)
94 return self.cache[token]
95 word = tuple(token)
---> 96 word = tuple(list(word[:-1]) + [word[-1]+'</w>'])
97 pairs = get_pairs(word)
98
## Expected behavior
I expect the tokenizers to output identical tokenizations so that fine-tuning is consistent with pre-training.
I expect the tokenizer to handle double spaces.
## Environment
* OS: Linux
* Python version: 3.7.4
* PyTorch version: 1.3.0
* PyTorch Transformers version (or branch): 2.1.1
* Using GPU ?
* Distributed of parallel setup ?
* Any other relevant information:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1920/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/1920/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1919 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1919/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1919/comments | https://api.github.com/repos/huggingface/transformers/issues/1919/events | https://github.com/huggingface/transformers/pull/1919 | 527,386,971 | MDExOlB1bGxSZXF1ZXN0MzQ0Njc2MDM0 | 1,919 | Fix typo in documentation. toto -> to | {
"login": "CrafterKolyan",
"id": 9883873,
"node_id": "MDQ6VXNlcjk4ODM4NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9883873?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CrafterKolyan",
"html_url": "https://github.com/CrafterKolyan",
"followers_url": "https://api.github.com/users/CrafterKolyan/followers",
"following_url": "https://api.github.com/users/CrafterKolyan/following{/other_user}",
"gists_url": "https://api.github.com/users/CrafterKolyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CrafterKolyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CrafterKolyan/subscriptions",
"organizations_url": "https://api.github.com/users/CrafterKolyan/orgs",
"repos_url": "https://api.github.com/users/CrafterKolyan/repos",
"events_url": "https://api.github.com/users/CrafterKolyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/CrafterKolyan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"thank you!"
] | 1,574 | 1,574 | 1,574 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1919/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1919",
"html_url": "https://github.com/huggingface/transformers/pull/1919",
"diff_url": "https://github.com/huggingface/transformers/pull/1919.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1919.patch",
"merged_at": 1574524517000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/1918 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1918/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1918/comments | https://api.github.com/repos/huggingface/transformers/issues/1918/events | https://github.com/huggingface/transformers/pull/1918 | 527,351,610 | MDExOlB1bGxSZXF1ZXN0MzQ0NjQ2Njk3 | 1,918 | Minor bug fixes on run_ner.py | {
"login": "manansanghi",
"id": 52307004,
"node_id": "MDQ6VXNlcjUyMzA3MDA0",
"avatar_url": "https://avatars.githubusercontent.com/u/52307004?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manansanghi",
"html_url": "https://github.com/manansanghi",
"followers_url": "https://api.github.com/users/manansanghi/followers",
"following_url": "https://api.github.com/users/manansanghi/following{/other_user}",
"gists_url": "https://api.github.com/users/manansanghi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manansanghi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manansanghi/subscriptions",
"organizations_url": "https://api.github.com/users/manansanghi/orgs",
"repos_url": "https://api.github.com/users/manansanghi/repos",
"events_url": "https://api.github.com/users/manansanghi/events{/privacy}",
"received_events_url": "https://api.github.com/users/manansanghi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1918?src=pr&el=h1) Report\n> Merging [#1918](https://codecov.io/gh/huggingface/transformers/pull/1918?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/26db31e0c09a8b5e1ca7a61c454b159eab9d86be?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1918?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1918 +/- ##\n=======================================\n Coverage 84.04% 84.04% \n=======================================\n Files 97 97 \n Lines 14333 14333 \n=======================================\n Hits 12046 12046 \n Misses 2287 2287\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1918?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1918?src=pr&el=footer). Last update [26db31e...17949e4](https://codecov.io/gh/huggingface/transformers/pull/1918?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
":+1: sorry, that was a pasting mistake in https://github.com/huggingface/transformers/pull/1792 🙈"
] | 1,574 | 1,574 | 1,574 | CONTRIBUTOR | null | Adding a dictionary entry outside of initialization requires an '=' instead of ':' | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1918/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1918",
"html_url": "https://github.com/huggingface/transformers/pull/1918",
"diff_url": "https://github.com/huggingface/transformers/pull/1918.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1918.patch",
"merged_at": 1574718484000
} |
https://api.github.com/repos/huggingface/transformers/issues/1917 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1917/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1917/comments | https://api.github.com/repos/huggingface/transformers/issues/1917/events | https://github.com/huggingface/transformers/issues/1917 | 527,319,982 | MDU6SXNzdWU1MjczMTk5ODI= | 1,917 | run_squad.py not running | {
"login": "maxmatical",
"id": 8890262,
"node_id": "MDQ6VXNlcjg4OTAyNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8890262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxmatical",
"html_url": "https://github.com/maxmatical",
"followers_url": "https://api.github.com/users/maxmatical/followers",
"following_url": "https://api.github.com/users/maxmatical/following{/other_user}",
"gists_url": "https://api.github.com/users/maxmatical/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxmatical/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxmatical/subscriptions",
"organizations_url": "https://api.github.com/users/maxmatical/orgs",
"repos_url": "https://api.github.com/users/maxmatical/repos",
"events_url": "https://api.github.com/users/maxmatical/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxmatical/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,574 | 1,574 | 1,574 | NONE | null | ## ❓ Questions & Help
When I try to run the script to fine-tune BERT on squad using the code from the examples:
```
python ./examples/run_squad.py \
--model_type bert \
--model_name_or_path bert-large-uncased-whole-word-masking \
--do_train \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v1.1.json \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ../models/wwm_uncased_finetuned_squad/ \
--per_gpu_eval_batch_size=3 \
--per_gpu_train_batch_size=3 \
```
But my terminal just gets stuck on this stage
<img width="518" alt="image" src="https://user-images.githubusercontent.com/8890262/69446607-c39b3780-0d22-11ea-8bea-6135f05640da.png">
When I run the pytests, everything has passed
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1917/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1916 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1916/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1916/comments | https://api.github.com/repos/huggingface/transformers/issues/1916/events | https://github.com/huggingface/transformers/issues/1916 | 527,256,511 | MDU6SXNzdWU1MjcyNTY1MTE= | 1,916 | Truncating GPT2 past | {
"login": "LHolten",
"id": 24637999,
"node_id": "MDQ6VXNlcjI0NjM3OTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/24637999?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LHolten",
"html_url": "https://github.com/LHolten",
"followers_url": "https://api.github.com/users/LHolten/followers",
"following_url": "https://api.github.com/users/LHolten/following{/other_user}",
"gists_url": "https://api.github.com/users/LHolten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LHolten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LHolten/subscriptions",
"organizations_url": "https://api.github.com/users/LHolten/orgs",
"repos_url": "https://api.github.com/users/LHolten/repos",
"events_url": "https://api.github.com/users/LHolten/events{/privacy}",
"received_events_url": "https://api.github.com/users/LHolten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! Your usage of `past` seems correct to me. After how many iterations do you feel the need to truncate?",
"I also would like to have an example that works correctly with GPT2. If past is not truncated, the model crashes after it reaches some threshold, which is, I suppose, `model.config.max_position_embeddings`. So, after it grows to this size, I truncate it as follows:\r\n```\r\n if past[0].shape[3] == model.config.max_position_embeddings - 1:\r\n past = [p[:, :, :, :-1, ...] for p in past]\r\n```\r\n\r\nThis is clearly broken, as the model's generation capabilities degrade dramatically after the truncation kicks in. Example (gpt2-medium):\r\n\r\n>...And here's why... for every person of difference, there's a different template when it comes to talking. We've seen them together for so long that most people don't know who we are. The owner of the room has probably hidden that while we are awake, but since then, there's usually a perception gap to gape at the outside. He's an enemy of rule, we know this. He was excommunicated and let die and our purity is worth more in terms of glory than doing battle together. He probably ran away from us, we're not sure why but we do remember his location. What kind of story is this then? In whatever and wherever he was taken, the hapless thief of light known as Arcadia forced down with the blessing of the goddess Kali from the eternities, has been returned to each of us... in as faceless a form as it's possible to<**TRUNCATION BEGINS HERE**> us we were possibly possible to informally possible to manipulateable to us. NoTa possible to some strange not always been my memories allow, unhidden possible to beholdenf the parts are, only known. Upon the wanderer. This is able to all asked and thus possible to callable to us — being made for the willed possible for them that of receiving the righteous deed has ever been in our power was when we can look of whether it needs permitted to appear plausible to those we may befitting to us you and with you can take.\r\n\r\nYou can see that it starts producing largely incoherent sentences with bad grammar. It also loses basic abilities like matching brackets and quote marks. If I truncate the first element instead, as @LHolten does, the result is even worse:\r\n>...And here's why... for every person of difference, there's a different template when it comes to talking. We've seen them together for so long that most people don't know who we are. The owner of the room has probably hidden that while we are awake, but since then, there's usually a perception gap to gape at the outside. He's an enemy of rule, we know this. He was excommunicated and let die and our purity is worth more in terms of glory than doing battle together. He probably ran away from us, we're not sure why but we do remember his location. What kind of story is this then? In whatever and wherever he was taken, the hapless thief of light known as Arcadia forced down with the blessing of the goddess Kali from the eternities, has been returned to each of us... in as faceless a form as it's possible to<**TRUNCATION BEGINS HERE**> to to to To Aud To Bed Since I January Nine Thou William July you very well, for this very purpose you can actually do without wearing/washing clothes any specific garments/tutexes, if they are Halloween- Halloween No-How-able Spells for Specific Results Splinterview February Treat him as Jess four of The H: We really dislike why's of Tactics The Neutral Generic Real Sheriff's the equivalent as Uthville He has been Henry's Gender Surprise Our Half<|endoftext|>\r\n\r\nI'm afraid the problem is more complex than it initially seemed to me. Maybe losing a single element of history is too damaging somehow? Perhaps the only correct way to deal with \"past overflow\" is to truncate the context itself, say by removing the first paragraph from it, and then regenerate the past from it?\r\nGenerally, what is the best way of doing continuous generation without <|endoftext|> tokens?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,581 | 1,581 | NONE | null | ## ❓ Questions & Help
I am using code like this to generate text:
```python
while True:
output_token, past = model.forward(output_token, past=past)
output_token = output_token[:, -1, :]
output_token = torch.multinomial(F.softmax(output_token, dim=-1), num_samples=1)
out = torch.cat((out, output_token), dim=1)
```
The problem with this is that `past` keeps growing.
My solution is to check the size of past and truncate it like this:
```python
if past[0].shape[-2] > max_past:
past = [p[..., -max_past:, :] for p in past]
```
I don't think this is correct, can anyone enlighten me? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1916/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/1916/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1915 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1915/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1915/comments | https://api.github.com/repos/huggingface/transformers/issues/1915/events | https://github.com/huggingface/transformers/issues/1915 | 527,222,410 | MDU6SXNzdWU1MjcyMjI0MTA= | 1,915 | Any plan to include BART and T5? | {
"login": "leuchine",
"id": 3937040,
"node_id": "MDQ6VXNlcjM5MzcwNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3937040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leuchine",
"html_url": "https://github.com/leuchine",
"followers_url": "https://api.github.com/users/leuchine/followers",
"following_url": "https://api.github.com/users/leuchine/following{/other_user}",
"gists_url": "https://api.github.com/users/leuchine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leuchine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leuchine/subscriptions",
"organizations_url": "https://api.github.com/users/leuchine/orgs",
"repos_url": "https://api.github.com/users/leuchine/repos",
"events_url": "https://api.github.com/users/leuchine/events{/privacy}",
"received_events_url": "https://api.github.com/users/leuchine/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"please search for issues beforehand (and fill the template if there's not already a relevant issue)"
] | 1,574 | 1,574 | 1,574 | NONE | null | # 🌟New model addition
## Model description
<!-- Important information -->
## Open Source status
* [ ] the model implementation is available: (give details)
* [ ] the model weights are available: (give details)
* [ ] who are the authors: (mention them)
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1915/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1914 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1914/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1914/comments | https://api.github.com/repos/huggingface/transformers/issues/1914/events | https://github.com/huggingface/transformers/issues/1914 | 527,178,312 | MDU6SXNzdWU1MjcxNzgzMTI= | 1,914 | How to perform common sense reasoning task with GPT-2? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,580 | 1,580 | NONE | null | Hello,
I am new to NLP so I have lots of questions.
I am interested in carrying out common sense reasoning task with GPT-2, for example, with Winograd Schema Challenge dataset.
Q1. How should I tokenize the Winograd Schema Challenge dataset to process it with GPT-2 (with the double heads model, for instance)? Can someone please give me an example?
Q2. Can GPT2DoubleHeadsModel be used to conduct common sense reasoning task with Winograd Schema Challenge dataset?
Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1914/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1913 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1913/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1913/comments | https://api.github.com/repos/huggingface/transformers/issues/1913/events | https://github.com/huggingface/transformers/issues/1913 | 527,098,833 | MDU6SXNzdWU1MjcwOTg4MzM= | 1,913 | Some Questions about XLNet | {
"login": "qlwang25",
"id": 38132016,
"node_id": "MDQ6VXNlcjM4MTMyMDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/38132016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qlwang25",
"html_url": "https://github.com/qlwang25",
"followers_url": "https://api.github.com/users/qlwang25/followers",
"following_url": "https://api.github.com/users/qlwang25/following{/other_user}",
"gists_url": "https://api.github.com/users/qlwang25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qlwang25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qlwang25/subscriptions",
"organizations_url": "https://api.github.com/users/qlwang25/orgs",
"repos_url": "https://api.github.com/users/qlwang25/repos",
"events_url": "https://api.github.com/users/qlwang25/events{/privacy}",
"received_events_url": "https://api.github.com/users/qlwang25/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,580 | 1,580 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
1, XLNet is a model with relative position embeddings, so can either pad the inputs on the right or on the left ?
2, If i pad the inputs on the left, the length of input is 100 and the max length is 128, i don't consider the PAD token in NER task, i.e. just use the hidden state of input token, use [-100:, :] over the output of model ?
For length dimension, the last 100 are taken; for hidden state dimension, the values of all dimensions are taken.
3, CLS token must be at end ?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1913/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1912 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1912/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1912/comments | https://api.github.com/repos/huggingface/transformers/issues/1912/events | https://github.com/huggingface/transformers/issues/1912 | 526,977,981 | MDU6SXNzdWU1MjY5Nzc5ODE= | 1,912 | XLNet is getting slower when enabling mems | {
"login": "makcedward",
"id": 36614806,
"node_id": "MDQ6VXNlcjM2NjE0ODA2",
"avatar_url": "https://avatars.githubusercontent.com/u/36614806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/makcedward",
"html_url": "https://github.com/makcedward",
"followers_url": "https://api.github.com/users/makcedward/followers",
"following_url": "https://api.github.com/users/makcedward/following{/other_user}",
"gists_url": "https://api.github.com/users/makcedward/gists{/gist_id}",
"starred_url": "https://api.github.com/users/makcedward/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/makcedward/subscriptions",
"organizations_url": "https://api.github.com/users/makcedward/orgs",
"repos_url": "https://api.github.com/users/makcedward/repos",
"events_url": "https://api.github.com/users/makcedward/events{/privacy}",
"received_events_url": "https://api.github.com/users/makcedward/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"When using `past` or `mems` values, you should be careful not to give the model the input ids which have already been computed. We've recently [added a documentation section ](https://huggingface.co/transformers/quickstart.html#using-the-past)detailing the use of `past`, which is similar to the way `mems` should be used. \r\n\r\nPlease notice we're only feeding the model the tokens for which the attention values have not be computed yet; which is only the last token in the case of sequential decoding.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,574 | 1,580 | 1,580 | NONE | null | Per API doc, using mems help to reduce inference time. However, I noticed that more time is needed when increasing mem_len. Do I misunderstand the usage of mem_len parameter?
Here is the testing code
```
import time
import torch
from transformers import XLNetTokenizer, XLNetLMHeadModel
tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
text = """
A horrible, messy split second presents
itself to the heart-shaped version as Scott is moved. The upcoming movie benefits at
the mental cost of ages 14 to 12. Nothing substantial is happened for almost 48 days.
When that happens, we lose our heart. <eod> The quick brown fox jumps over the lazy dog. <mask>
"""
mems = None
input_ids = tokenizer.encode(text)
input_ids = torch.tensor(input_ids).unsqueeze(0)
epoch = 20
for men_len in [0, 16, 32, 64, 128]:
model = XLNetLMHeadModel.from_pretrained('xlnet-base-cased', mem_len=men_len)
start_dt = time.monotonic()
for i in range(epoch):
outputs = model(input_ids=input_ids, mems=mems)
if men_len > 0:
mems = outputs[1]
end_dt = time.monotonic()
print('Average Duration for men_len {}: {}'.format(men_len, round(end_dt-start_dt, 2)))
```
Output is
```
Average Duration for men_len 0: 2.49
Average Duration for men_len 16: 2.62
Average Duration for men_len 32: 2.67
Average Duration for men_len 64: 2.81
Average Duration for men_len 128: 3.28
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1912/timeline | completed | null | null |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.