url (stringlengths 62-66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64, 377M-2.15B) | node_id (stringlengths 18-32) | number (int64, 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64, 1.54k-1.71k) | updated_at (int64, 1.54k-1.71k) | closed_at (int64, 1.54k-1.71k, nullable ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0-234k, nullable ⌀) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses 3 values) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/8021 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8021/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8021/comments | https://api.github.com/repos/huggingface/transformers/issues/8021/events | https://github.com/huggingface/transformers/issues/8021 | 728,822,358 | MDU6SXNzdWU3Mjg4MjIzNTg= | 8,021 | [bart] SinusoidalPositionalEmbedding breaks under pytorch-nightly | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,604 | 1,604 | CONTRIBUTOR | null | pytorch-nightly breaks these:
```
FAILED tests/test_modeling_bart.py::TestSinusoidalPositionalEmbeddings::test_odd_embed_dim - RuntimeError: a view of a leaf Variable that requires...
FAILED tests/test_modeling_bart.py::TestSinusoidalPositionalEmbeddings::test_positional_emb_cache_logic - RuntimeError: a view of a leaf Variable ...
FAILED tests/test_modeling_bart.py::TestSinusoidalPositionalEmbeddings::test_positional_emb_weights_against_marian - RuntimeError: a view of a lea...
F
```
```
================================================================ test session starts ================================================================
platform linux -- Python 3.8.5, pytest-6.1.1, py-1.9.0, pluggy-0.13.1 -- /home/stas/anaconda3/envs/main-38/bin/python
cachedir: .pytest_cache
rootdir: /mnt/nvme1/code/huggingface/transformers-master
plugins: typeguard-2.10.0, forked-1.3.0, xdist-2.1.0, instafail-0.4.2
collecting ... 2020-10-24 09:14:35.276431: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
collected 1 item
tests/test_modeling_bart.py::TestSinusoidalPositionalEmbeddings::test_odd_embed_dim FAILED
_______________________________________________ TestSinusoidalPositionalEmbeddings.test_odd_embed_dim _______________________________________________
self = <tests.test_modeling_bart.TestSinusoidalPositionalEmbeddings testMethod=test_odd_embed_dim>
def test_odd_embed_dim(self):
with self.assertRaises(NotImplementedError):
SinusoidalPositionalEmbedding(num_positions=4, embedding_dim=5, padding_idx=0).to(torch_device)
# odd num_positions is allowed
> SinusoidalPositionalEmbedding(num_positions=5, embedding_dim=4, padding_idx=0).to(torch_device)
tests/test_modeling_bart.py:627:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/modeling_bart.py:1331: in __init__
self.weight = self._init_weight(self.weight)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
out = Parameter containing:
tensor([[ 0.1368, 2.6925, 1.3918, 0.7332],
[-1.2813, -0.3071, 1.0553, -0.4325],
...6445],
[-1.6619, -0.2872, 0.6869, 0.6489],
[-1.5226, 0.1161, -0.2026, 0.1853]], requires_grad=True)
@staticmethod
def _init_weight(out: nn.Parameter):
"""Identical to the XLM create_sinusoidal_embeddings except features are not interleaved.
The cos features are in the 2nd half of the vector. [dim // 2:]
"""
n_pos, dim = out.shape
position_enc = np.array(
[[pos / np.power(10000, 2 * (j // 2) / dim) for j in range(dim)] for pos in range(n_pos)]
)
> out[:, 0 : dim // 2] = torch.FloatTensor(np.sin(position_enc[:, 0::2])) # This line breaks for odd n_pos
E RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.
src/transformers/modeling_bart.py:1342: RuntimeError
============================================================== short test summary info ==============================================================
FAILED tests/test_modeling_bart.py::TestSinusoidalPositionalEmbeddings::test_odd_embed_dim - RuntimeError: a view of a leaf Variable that requires...
=========================================================== 1 failed, 3 warnings in 3.01s ===========================================================
```
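One possible workaround (a sketch only, not necessarily the fix that will land) is to do the writes with autograd disabled, since the error only fires when the in-place op on a grad-requiring leaf is recorded:
```
import numpy as np
import torch
from torch import nn

# Same math as _init_weight above; shapes from the failing test (n_pos=5, dim=4).
out = nn.Parameter(torch.empty(5, 4))
n_pos, dim = out.shape
position_enc = np.array(
    [[pos / np.power(10000, 2 * (j // 2) / dim) for j in range(dim)] for pos in range(n_pos)]
)
with torch.no_grad():  # the slice assignments are no longer tracked by autograd
    out[:, 0 : dim // 2] = torch.FloatTensor(np.sin(position_enc[:, 0::2]))
    out[:, dim // 2 :] = torch.FloatTensor(np.cos(position_enc[:, 1::2]))
```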
## Environment info
```
- `transformers` version: 3.4.0
- Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.0.dev20201023 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8021/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8020 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8020/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8020/comments | https://api.github.com/repos/huggingface/transformers/issues/8020/events | https://github.com/huggingface/transformers/issues/8020 | 728,814,000 | MDU6SXNzdWU3Mjg4MTQwMDA= | 8,020 | sentencepiece 0.1.94 causing segmentation fault | {
"login": "mejran",
"id": 823108,
"node_id": "MDQ6VXNlcjgyMzEwOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/823108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mejran",
"html_url": "https://github.com/mejran",
"followers_url": "https://api.github.com/users/mejran/followers",
"following_url": "https://api.github.com/users/mejran/following{/other_user}",
"gists_url": "https://api.github.com/users/mejran/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mejran/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mejran/subscriptions",
"organizations_url": "https://api.github.com/users/mejran/orgs",
"repos_url": "https://api.github.com/users/mejran/repos",
"events_url": "https://api.github.com/users/mejran/events{/privacy}",
"received_events_url": "https://api.github.com/users/mejran/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This can be worked around, for anyone hitting this issue, by setting `sentencepiece==0.1.91` explicitly.",
"Maybe we could set `sentencepiece==0.1.91` in the setup.py to prevent this from happening, as we already had the issue with the 0.1.92.\r\n\r\nDo you want to open a PR for that?",
"This should also be fixed by #8073 ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,610 | 1,610 | NONE | null | ## Environment info
- `transformers` version: 3.4.0 and 3.3.1
- Platform: Linux/Sagemaker
- Python version: 3.7
- PyTorch version (GPU?): 1.4
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
## Information
## To reproduce
Steps to reproduce the behavior:
1. `pip install transformers[torch]`
2. `from transformers.trainer import TrainingArguments, Trainer`
`import torch`
3. `torch.tensor([1,2,3])`
`transformers` 3.3.1 seg faults at step 3, `transformers` 3.4 seg faults at step 2.
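Until a pin lands in `setup.py` (see #8073 and the comments above), installing the known-good release explicitly works around the crash:
```
pip install "transformers[torch]" "sentencepiece==0.1.91"
```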
## Expected behavior
No segmentation fault | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8020/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8020/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8019 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8019/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8019/comments | https://api.github.com/repos/huggingface/transformers/issues/8019/events | https://github.com/huggingface/transformers/issues/8019 | 728,813,612 | MDU6SXNzdWU3Mjg4MTM2MTI= | 8,019 | Colab can't import trim_batch for T5, anything changed in transformers.tokenization_utils? | {
"login": "yxu1168",
"id": 50936877,
"node_id": "MDQ6VXNlcjUwOTM2ODc3",
"avatar_url": "https://avatars.githubusercontent.com/u/50936877?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yxu1168",
"html_url": "https://github.com/yxu1168",
"followers_url": "https://api.github.com/users/yxu1168/followers",
"following_url": "https://api.github.com/users/yxu1168/following{/other_user}",
"gists_url": "https://api.github.com/users/yxu1168/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yxu1168/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yxu1168/subscriptions",
"organizations_url": "https://api.github.com/users/yxu1168/orgs",
"repos_url": "https://api.github.com/users/yxu1168/repos",
"events_url": "https://api.github.com/users/yxu1168/events{/privacy}",
"received_events_url": "https://api.github.com/users/yxu1168/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Had the same issue. Found a trim_batch implementation on the transformers repo, used it, didn't face any issues so far.\r\n\r\n[Link to implementation](https://github.com/huggingface/transformers/blob/783d7d2629e97c5f0c5f9ef01b8c66410275c204/examples/research_projects/rag/utils_rag.py#L35)\r\n\r\nCode for reference:\r\n\r\n```python\r\ndef trim_batch(\r\n input_ids,\r\n pad_token_id,\r\n attention_mask=None,\r\n):\r\n \"\"\"Remove columns that are populated exclusively by pad_token_id\"\"\"\r\n keep_column_mask = input_ids.ne(pad_token_id).any(dim=0)\r\n if attention_mask is None:\r\n return input_ids[:, keep_column_mask]\r\n else:\r\n return (input_ids[:, keep_column_mask], attention_mask[:, keep_column_mask])\r\n```"
] | 1,603 | 1,613 | 1,609 | NONE | null | ```
from transformers.tokenization_utils import trim_batch
ImportError: cannot import name 'trim_batch'
```
Any solutions? Thanks a lot. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8019/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8018 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8018/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8018/comments | https://api.github.com/repos/huggingface/transformers/issues/8018/events | https://github.com/huggingface/transformers/issues/8018 | 728,798,403 | MDU6SXNzdWU3Mjg3OTg0MDM= | 8,018 | tutorial document | {
"login": "jc-hou",
"id": 30210529,
"node_id": "MDQ6VXNlcjMwMjEwNTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/30210529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jc-hou",
"html_url": "https://github.com/jc-hou",
"followers_url": "https://api.github.com/users/jc-hou/followers",
"following_url": "https://api.github.com/users/jc-hou/following{/other_user}",
"gists_url": "https://api.github.com/users/jc-hou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jc-hou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jc-hou/subscriptions",
"organizations_url": "https://api.github.com/users/jc-hou/orgs",
"repos_url": "https://api.github.com/users/jc-hou/repos",
"events_url": "https://api.github.com/users/jc-hou/events{/privacy}",
"received_events_url": "https://api.github.com/users/jc-hou/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"You're correct! @patrickvonplaten, git blame shows you as the author, want to fix?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,609 | 1,609 | NONE | null | In the Translation section in https://huggingface.co/transformers/task_summary.html
"
Here is an example of doing translation using a model and a tokenizer. The process is the following:
1.Instantiate a tokenizer and a model from the checkpoint name. Summarization is usually done using an encoder-decoder model, such as Bart or T5.
2.Define the article that should be summarizaed.
3.Add the T5 specific prefix “translate English to German: “
4.Use the PreTrainedModel.generate() method to perform the translation.
"
Steps 1 and 2 seem to be copies from the Summarization section that were not modified accordingly.
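For reference, the T5 translation workflow those steps are meant to describe looks roughly like this (a sketch; checkpoint and input text are illustrative):
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

inputs = tokenizer("translate English to German: How old are you?", return_tensors="pt")
outputs = model.generate(inputs["input_ids"])
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```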
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8018/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8017 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8017/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8017/comments | https://api.github.com/repos/huggingface/transformers/issues/8017/events | https://github.com/huggingface/transformers/pull/8017 | 728,779,312 | MDExOlB1bGxSZXF1ZXN0NTA5NDI0Mzg5 | 8,017 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8017/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8017",
"html_url": "https://github.com/huggingface/transformers/pull/8017",
"diff_url": "https://github.com/huggingface/transformers/pull/8017.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8017.patch",
"merged_at": 1603973988000
} |
https://api.github.com/repos/huggingface/transformers/issues/8016 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8016/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8016/comments | https://api.github.com/repos/huggingface/transformers/issues/8016/events | https://github.com/huggingface/transformers/pull/8016 | 728,777,651 | MDExOlB1bGxSZXF1ZXN0NTA5NDIzMTY2 | 8,016 | Mlflow integration callback | {
"login": "noise-field",
"id": 14188757,
"node_id": "MDQ6VXNlcjE0MTg4NzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/14188757?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/noise-field",
"html_url": "https://github.com/noise-field",
"followers_url": "https://api.github.com/users/noise-field/followers",
"following_url": "https://api.github.com/users/noise-field/following{/other_user}",
"gists_url": "https://api.github.com/users/noise-field/gists{/gist_id}",
"starred_url": "https://api.github.com/users/noise-field/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/noise-field/subscriptions",
"organizations_url": "https://api.github.com/users/noise-field/orgs",
"repos_url": "https://api.github.com/users/noise-field/repos",
"events_url": "https://api.github.com/users/noise-field/events{/privacy}",
"received_events_url": "https://api.github.com/users/noise-field/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
This PR adds Trainer integration with [MLflow](https://mlflow.org/).
It is implemented in roughly the same way as other integration callbacks (CometML, wandb) and gets added to the list of Trainer callbacks automatically when mlflow is installed. All the mlflow parameters are configured with env variables, as described in the library documentation. This PR adds an additional environment variable, `HF_MLFLOW_LOG_ARTIFACTS`, which controls whether to use mlflow artifact logging facility to save artifacts generated after training (it doesn't make much sense if mlflow is used locally).
Fixes #7698
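A rough usage sketch (env-var names other than `HF_MLFLOW_LOG_ARTIFACTS` are standard mlflow ones; `model` and `train_dataset` are placeholders):
```
import os

os.environ["MLFLOW_TRACKING_URI"] = "http://localhost:5000"  # standard mlflow config
os.environ["HF_MLFLOW_LOG_ARTIFACTS"] = "TRUE"  # new env var added by this PR

from transformers import Trainer, TrainingArguments

args = TrainingArguments(output_dir="out")
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)  # placeholders
trainer.train()  # params and metrics get logged to mlflow via the new callback
```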
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8016/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/8016/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8016",
"html_url": "https://github.com/huggingface/transformers/pull/8016",
"diff_url": "https://github.com/huggingface/transformers/pull/8016.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8016.patch",
"merged_at": 1603719718000
} |
https://api.github.com/repos/huggingface/transformers/issues/8015 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8015/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8015/comments | https://api.github.com/repos/huggingface/transformers/issues/8015/events | https://github.com/huggingface/transformers/pull/8015 | 728,777,467 | MDExOlB1bGxSZXF1ZXN0NTA5NDIzMDI2 | 8,015 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8015/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8015",
"html_url": "https://github.com/huggingface/transformers/pull/8015",
"diff_url": "https://github.com/huggingface/transformers/pull/8015.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8015.patch",
"merged_at": 1603973975000
} |
https://api.github.com/repos/huggingface/transformers/issues/8014 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8014/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8014/comments | https://api.github.com/repos/huggingface/transformers/issues/8014/events | https://github.com/huggingface/transformers/issues/8014 | 728,776,918 | MDU6SXNzdWU3Mjg3NzY5MTg= | 8,014 | weird output shape when fine-tuning TFDistilBertForSequenceClassification | {
"login": "rbroc",
"id": 32483140,
"node_id": "MDQ6VXNlcjMyNDgzMTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/32483140?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rbroc",
"html_url": "https://github.com/rbroc",
"followers_url": "https://api.github.com/users/rbroc/followers",
"following_url": "https://api.github.com/users/rbroc/following{/other_user}",
"gists_url": "https://api.github.com/users/rbroc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rbroc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rbroc/subscriptions",
"organizations_url": "https://api.github.com/users/rbroc/orgs",
"repos_url": "https://api.github.com/users/rbroc/repos",
"events_url": "https://api.github.com/users/rbroc/events{/privacy}",
"received_events_url": "https://api.github.com/users/rbroc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"One probably related issue is that if I try to use `model.evaluate` I get an error with the following traceback:\r\n```\r\n\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-125-3f03cbe29a62> in <module>\r\n----> 1 model.evaluate(test_dataset)\r\n\r\n~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)\r\n 106 def _method_wrapper(self, *args, **kwargs):\r\n 107 if not self._in_multi_worker_mode(): # pylint: disable=protected-access\r\n--> 108 return method(self, *args, **kwargs)\r\n 109 \r\n 110 # Running inside `run_distribute_coordinator` already.\r\n\r\n~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in evaluate(self, x, y, batch_size, verbose, sample_weight, steps, callbacks, max_queue_size, workers, use_multiprocessing, return_dict)\r\n 1377 with trace.Trace('TraceContext', graph_type='test', step_num=step):\r\n 1378 callbacks.on_test_batch_begin(step)\r\n-> 1379 tmp_logs = test_function(iterator)\r\n 1380 if data_handler.should_sync:\r\n 1381 context.async_wait()\r\n\r\n~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)\r\n 778 else:\r\n 779 compiler = \"nonXla\"\r\n--> 780 result = self._call(*args, **kwds)\r\n 781 \r\n 782 new_tracing_count = self._get_tracing_count()\r\n\r\n~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)\r\n 821 # This is the first call of __call__, so we have to initialize.\r\n 822 initializers = []\r\n--> 823 self._initialize(args, kwds, add_initializers_to=initializers)\r\n 824 finally:\r\n 825 # At this point we know that the initialization is complete (or less\r\n\r\n~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)\r\n 695 self._concrete_stateful_fn = (\r\n 696 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access\r\n--> 697 *args, **kwds))\r\n 698 \r\n 699 def invalid_creator_scope(*unused_args, **unused_kwds):\r\n\r\n~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)\r\n 2853 args, kwargs = None, None\r\n 2854 with self._lock:\r\n-> 2855 graph_function, _, _ = self._maybe_define_function(args, kwargs)\r\n 2856 return graph_function\r\n 2857 \r\n\r\n~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)\r\n 3211 \r\n 3212 self._function_cache.missed.add(call_context_key)\r\n-> 3213 graph_function = self._create_graph_function(args, kwargs)\r\n 3214 self._function_cache.primary[cache_key] = graph_function\r\n 3215 return graph_function, args, kwargs\r\n\r\n~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)\r\n 3073 arg_names=arg_names,\r\n 3074 override_flat_arg_shapes=override_flat_arg_shapes,\r\n-> 3075 capture_by_value=self._capture_by_value),\r\n 3076 self._function_attributes,\r\n 3077 function_spec=self.function_spec,\r\n\r\n~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, 
autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)\r\n 984 _, original_func = tf_decorator.unwrap(python_func)\r\n 985 \r\n--> 986 func_outputs = python_func(*func_args, **func_kwargs)\r\n 987 \r\n 988 # invariant: `func_outputs` contains only Tensors, CompositeTensors,\r\n\r\n~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)\r\n 598 # __wrapped__ allows AutoGraph to swap in a converted function. We give\r\n 599 # the function a weak reference to itself to avoid a reference cycle.\r\n--> 600 return weak_wrapped_fn().__wrapped__(*args, **kwds)\r\n 601 weak_wrapped_fn = weakref.ref(wrapped_fn)\r\n 602 \r\n\r\n~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)\r\n 971 except Exception as e: # pylint:disable=broad-except\r\n 972 if hasattr(e, \"ag_error_metadata\"):\r\n--> 973 raise e.ag_error_metadata.to_exception(e)\r\n 974 else:\r\n 975 raise\r\n\r\nValueError: in user code:\r\n\r\n /Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:1224 test_function *\r\n return step_function(self, iterator)\r\n /Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_utils.py:142 compute_loss *\r\n return loss_fn(labels, logits)\r\n /Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/losses.py:149 __call__ **\r\n losses = ag_call(y_true, y_pred)\r\n /Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/losses.py:253 call **\r\n return ag_fn(y_true, y_pred, **self._fn_kwargs)\r\n /Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper\r\n return target(*args, **kwargs)\r\n /Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/losses.py:1567 sparse_categorical_crossentropy\r\n y_true, y_pred, from_logits=from_logits, axis=axis)\r\n /Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper\r\n return target(*args, **kwargs)\r\n /Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/backend.py:4783 sparse_categorical_crossentropy\r\n labels=target, logits=output)\r\n /Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper\r\n return target(*args, **kwargs)\r\n /Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py:4175 sparse_softmax_cross_entropy_with_logits_v2\r\n labels=labels, logits=logits, name=name)\r\n /Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper\r\n return target(*args, **kwargs)\r\n /Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py:4090 sparse_softmax_cross_entropy_with_logits\r\n logits.get_shape()))\r\n\r\n ValueError: Shape mismatch: The shape of labels (received (1,)) should equal the shape of logits except for the last dimension (received (512, 100)).\r\n```\r\n\r\nShouldn't be an error related to my dataset, as it's constructed the same way as in the tutorial...",
"never mind, I didn't realize the model requires manually batching the dataset at prediction. closing :) "
] | 1,603 | 1,603 | 1,603 | NONE | null | I'm trying to fine-tune `TFDistilBertForSequenceClassification` for multi-class classification (100 classes) on a custom dataset following the tutorial at https://huggingface.co/transformers/custom_datasets.html.
I'm following the workflow for fine-tuning in native tensorflow, i.e.:
```
import tensorflow as tf  # needed for tf.keras.optimizers below
from transformers import TFDistilBertForSequenceClassification
model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
model.compile(optimizer=optimizer, loss=model.compute_loss) # can also use any keras loss fn
model.fit(train_dataset.shuffle(1000).batch(16), epochs=3, batch_size=16)
```
Everything seems to go fine during fine-tuning, but when I try to predict on the test dataset (2000 samples) using `model.predict(test_dataset)`, I get an output with weird shape.
That is, instead of getting an output of shape (1, 2000, 100), I get one with shape (1, 1024000, 100), where 1024000 happens to be number of test examples (2000) times the sequence length (512).
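Resolution, per the comments above: the dataset has to be batched manually before prediction. A minimal sketch (batch size is illustrative):
```
# keras treats each dataset element as a batch, so an unbatched dataset of
# 512-token examples produces flattened per-token outputs. Batching restores
# per-example logits, e.g. (2000, 100).
preds = model.predict(test_dataset.batch(16))
```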
Any hint on what's going on here? Sorry if it's a naïve mistake on my side, I'm new to tf. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8014/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8013 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8013/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8013/comments | https://api.github.com/repos/huggingface/transformers/issues/8013/events | https://github.com/huggingface/transformers/pull/8013 | 728,729,536 | MDExOlB1bGxSZXF1ZXN0NTA5Mzg5NDE1 | 8,013 | [doc prepare_seq2seq_batch] fix docs | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | MEMBER | null | The `prepare_seq2seq_batch` method returns `[input_ids, attention_mask, labels]`, not `[input_ids, attention_mask, decoder_input_ids]`. This PR fixes the docs accordingly.
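A quick way to check (the checkpoint name is just an example):
```
from transformers import MarianTokenizer

tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
batch = tok.prepare_seq2seq_batch(src_texts=["How are you?"], tgt_texts=["Wie geht es dir?"])
print(list(batch.keys()))  # ['input_ids', 'attention_mask', 'labels']
```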
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8013/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8013",
"html_url": "https://github.com/huggingface/transformers/pull/8013",
"diff_url": "https://github.com/huggingface/transformers/pull/8013.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8013.patch",
"merged_at": 1603568028000
} |
https://api.github.com/repos/huggingface/transformers/issues/8012 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8012/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8012/comments | https://api.github.com/repos/huggingface/transformers/issues/8012/events | https://github.com/huggingface/transformers/pull/8012 | 728,726,879 | MDExOlB1bGxSZXF1ZXN0NTA5Mzg3NTgz | 8,012 | Add model_cards for DynaBERT | {
"login": "mazicwong",
"id": 17029801,
"node_id": "MDQ6VXNlcjE3MDI5ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/17029801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mazicwong",
"html_url": "https://github.com/mazicwong",
"followers_url": "https://api.github.com/users/mazicwong/followers",
"following_url": "https://api.github.com/users/mazicwong/following{/other_user}",
"gists_url": "https://api.github.com/users/mazicwong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mazicwong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mazicwong/subscriptions",
"organizations_url": "https://api.github.com/users/mazicwong/orgs",
"repos_url": "https://api.github.com/users/mazicwong/repos",
"events_url": "https://api.github.com/users/mazicwong/events{/privacy}",
"received_events_url": "https://api.github.com/users/mazicwong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | Add model_cards for DynaBERT_MNLI and DynaBERT_SST-2. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8012/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8012",
"html_url": "https://github.com/huggingface/transformers/pull/8012",
"diff_url": "https://github.com/huggingface/transformers/pull/8012.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8012.patch",
"merged_at": 1603973958000
} |
https://api.github.com/repos/huggingface/transformers/issues/8011 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8011/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8011/comments | https://api.github.com/repos/huggingface/transformers/issues/8011/events | https://github.com/huggingface/transformers/issues/8011 | 728,701,912 | MDU6SXNzdWU3Mjg3MDE5MTI= | 8,011 | AttributeError: module 'tensorflow.python.keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects' | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"### Solution\r\n\r\nIf you encounter the same, here is how to fix it:\r\n\r\n```\r\npip uninstall -y tensorflow-gpu tensorflow\r\npip install tensorflow-gpu -U \r\n```\r\n(I assume you want the gpu version - adjust if not)\r\n",
"I think that for the last few versions, when installing `tensorflow` you get a `tensorflow` that can use your GPU out of the box, so there's no need to play with `tensorflow-gpu`/`tensorflow`!",
"I think it's another package's dependency pulling in `tensorflow-gpu` - e.g. I see: `wandb/requirements.txt:tensorflow-gpu==2.3.1`",
"In my case, after `conda install tensorflow-gpu` - to install `tensorflow` version 2.2,\r\nI then tried `pip install autokeras` (because conda does not have this package). \r\n`pip` would install **_another_** `tensorflow` (in this case, version 2.3).\r\n\r\nAnd this is when the problem happened.\r\n\r\n```\r\n>>> import tensorflow as tf\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/longnv/.conda/envs/testENV/lib/python3.6/site-packages/tensorflow/__init__.py\", line 41, in <module>\r\n from tensorflow.python.tools import module_util as _module_util\r\n File \"/home/longnv/.conda/envs/testENV/lib/python3.6/site-packages/tensorflow/python/__init__.py\", line 84, in <module>\r\n from tensorflow.python import keras\r\n File \"/home/longnv/.conda/envs/testENV/lib/python3.6/site-packages/tensorflow/python/keras/__init__.py\", line 27, in <module>\r\n from tensorflow.python.keras import models\r\n File \"/home/longnv/.conda/envs/testENV/lib/python3.6/site-packages/tensorflow/python/keras/models.py\", line 24, in <module>\r\n from tensorflow.python.keras import metrics as metrics_module\r\n File \"/home/longnv/.conda/envs/testENV/lib/python3.6/site-packages/tensorflow/python/keras/metrics.py\", line 37, in <module>\r\n from tensorflow.python.keras.engine import base_layer\r\n File \"/home/longnv/.conda/envs/testENV/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 51, in <module>\r\n from tensorflow.python.keras import initializers\r\n File \"/home/longnv/.conda/envs/testENV/lib/python3.6/site-packages/tensorflow/python/keras/initializers/__init__.py\", line 127, in <module>\r\n populate_deserializable_objects()\r\n File \"/home/longnv/.conda/envs/testENV/lib/python3.6/site-packages/tensorflow/python/keras/initializers/__init__.py\", line 85, in populate_deserializable_objects\r\n generic_utils.populate_dict_with_module_objects(\r\nAttributeError: module 'tensorflow.python.keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'\r\n>>> conda uninstall keras\r\n\r\n```\r\nIt seems like we have similar error.\r\nI am not sure what you have done with your env but your solution worked in my case too, obviously.",
"Please add the snippet of the code into the program of general_utils.py\r\n\r\n**1. Find out your path**\r\n\r\nFor me, the path is listed as follows:\r\n\r\n/home/user/miniconda3/lib/python3.7/site-packages/tensorflow/python/keras/utils/generic_utils.py\r\n\r\n**2. Paste the code into the program of generic_utils.py**\r\n```\r\ndef populate_dict_with_module_objects(target_dict, modules, obj_filter):\r\n for module in modules:\r\n for name in dir(module):\r\n obj = getattr(module, name)\r\n if obj_filter(obj):\r\n target_dict[name] = obj\r\n```\r\n\r\n**3. Initialize the dev tool**\r\nIt will be working after initializing the dev tool (it is Terminal for me in Linux)"
] | 1,603 | 1,668 | 1,603 | CONTRIBUTOR | null | Problem and solution:
This has happened several times recently: some env gets messed up and I end up with most tests failing with:
```
____________________________________________________ ERROR collecting tests/test_benchmark_tf.py ____________________________________________________
tests/test_benchmark_tf.py:6: in <module>
from transformers import AutoConfig, is_tf_available
src/transformers/__init__.py:22: in <module>
from .integrations import ( # isort:skip
src/transformers/integrations.py:58: in <module>
from .file_utils import is_torch_tpu_available
src/transformers/file_utils.py:59: in <module>
import tensorflow as tf
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tensorflow/__init__.py:41: in <module>
from tensorflow.python.tools import module_util as _module_util
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tensorflow/python/__init__.py:84: in <module>
from tensorflow.python import keras
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tensorflow/python/keras/__init__.py:27: in <module>
from tensorflow.python.keras import models
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tensorflow/python/keras/models.py:24: in <module>
from tensorflow.python.keras import metrics as metrics_module
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tensorflow/python/keras/metrics.py:37: in <module>
from tensorflow.python.keras.engine import base_layer
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py:51: in <module>
from tensorflow.python.keras import initializers
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tensorflow/python/keras/initializers/__init__.py:127: in <module>
populate_deserializable_objects()
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tensorflow/python/keras/initializers/__init__.py:85: in populate_deserializable_objects
generic_utils.populate_dict_with_module_objects(
E AttributeError: module 'tensorflow.python.keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'
collected 0 items / 1 error
```
I think it's conda installing a broken tensorflow, I'm not 100% sure - see the solution in the next comment.
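For quick reference, the fix from that comment:
```
pip uninstall -y tensorflow-gpu tensorflow
pip install tensorflow-gpu -U
```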
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8011/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8011/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8010 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8010/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8010/comments | https://api.github.com/repos/huggingface/transformers/issues/8010/events | https://github.com/huggingface/transformers/issues/8010 | 728,636,859 | MDU6SXNzdWU3Mjg2MzY4NTk= | 8,010 | src->transformers->generation_tf_util.py ->_generate_beam_search->outputs = self(**model_inputs) why self ?There is not a function? | {
"login": "RyanPeking",
"id": 46998598,
"node_id": "MDQ6VXNlcjQ2OTk4NTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/46998598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RyanPeking",
"html_url": "https://github.com/RyanPeking",
"followers_url": "https://api.github.com/users/RyanPeking/followers",
"following_url": "https://api.github.com/users/RyanPeking/following{/other_user}",
"gists_url": "https://api.github.com/users/RyanPeking/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RyanPeking/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RyanPeking/subscriptions",
"organizations_url": "https://api.github.com/users/RyanPeking/orgs",
"repos_url": "https://api.github.com/users/RyanPeking/repos",
"events_url": "https://api.github.com/users/RyanPeking/events{/privacy}",
"received_events_url": "https://api.github.com/users/RyanPeking/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"is self.generate?",
"Hello! Could you provide all the information relative to your environment as asked in the template, as well as the code that generates the error? Thanks!",
"Thanks for your replying!\r\n\r\noutputs = self(**model_inputs)\r\nThis code appear in function _generate_beam_search and _generate_no_beam_search.\r\nThe 'self' is instance of class, it is not callable, so it can be used like this? There is not a function name like self.generate?\r\n\r\nThus, i just test the genaration_tf_util.py, so i add the test code like this. And i delete unrelated code just for test this before 'while cur_len < max_length: model_inputs = self.prepare_inputs_for_generation(\r\n input_ids, past=past, attention_mask=attention_mask, use_cache=use_cache\r\n )':\r\n\r\nMy environment is:\r\nwindows\r\ntensorflow 2.0.0rc0\r\nnumpy 1.17.2\r\n\r\n\r\n```\r\n# my add\r\n# if __name__ == '__main__':\r\n# a = TFGenerationMixin()\r\n# a._generate_beam_search(input_ids = None,\r\n# cur_len = 10,\r\n# max_length = 100,\r\n# min_length = 5,\r\n# do_sample = None,\r\n# early_stopping = None,\r\n# # num_beams = None,\r\n# temperature = None,\r\n# top_k = None,\r\n# top_p = None,\r\n# repetition_penalty = None,\r\n# no_repeat_ngram_size=None,\r\n# bad_words_ids = None,\r\n# # bos_token_id = None,\r\n# pad_token_id = None,\r\n# eos_token_id = None,\r\n# batch_size=4,\r\n# num_return_sequences=None,\r\n# length_penalty = None,\r\n# num_beams=None,\r\n# vocab_size=None,\r\n# # no_repeat_ngram_size = None,\r\n# # num_return_sequences = None,\r\n# encoder_outputs=None,\r\n# attention_mask = None,\r\n# # decoder_start_token_id = None,\r\n# use_cache = None,\r\n# )\r\n\r\n```\r\n\r\n> Hello! Could you provide all the information relative to your environment as asked in the template, as well as the code that generates the error? Thanks!\r\n\r\n",
"The `self` is the instance of the class, calling it as `self(...)` results in calling the `__call__` method.",
"see https://www.geeksforgeeks.org/callable-in-python/",
"> The `self` is the instance of the class, calling it as `self(...)` results in calling the `__call__` method.\r\n\r\nBut where is __call__ method of class TFGenerationMixin?",
"Ah, my bad, I hadn't taken a close enough look at your code. You can't initialize a `TFGenerationMixin` like this, as it is an abstract class. It is there to have TF classes inherit from that abstract class, not to be used as-is.\r\n\r\nCould you tell me what you're trying to do so I could guide you towards the classes you should use?",
"TFPreTrainedModel inherit from TFGenerationMixin, and TFPreTrainedModel is base class for all TF models. Some TF models have __call__ method, leading to self(..) in TFGenerationMixin can be callable, that's right?\r\n\r\nThis question appear just when I read the code, not for do something. \r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,609 | 1,609 | NONE | null | Traceback (most recent call last):
File "D:\Program Files\JetBrains\PyCharm 2018.3.2\helpers\pydev\pydevd.py", line 1741, in <module>
main()
File "D:\Program Files\JetBrains\PyCharm 2018.3.2\helpers\pydev\pydevd.py", line 1735, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "D:\Program Files\JetBrains\PyCharm 2018.3.2\helpers\pydev\pydevd.py", line 1135, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "D:\Program Files\JetBrains\PyCharm 2018.3.2\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "D:/Github_project/transformers-master/src/transformers/generation_tf_utils.py", line 1128, in <module>
use_cache = None,
File "D:/Github_project/transformers-master/src/transformers/generation_tf_utils.py", line 625, in _generate_beam_search
outputs = self(**model_inputs) # (batch_size * num_beams, cur_len, vocab_size)
TypeError: 'TFGenerationMixin' object is not callable | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8010/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8010/timeline | completed | null | null |
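The thread in issue 8010 above turns on Python's `__call__` protocol: `self(**model_inputs)` inside `TFGenerationMixin` only works once the mixin is combined with a model class that is itself callable (as Keras models are). A minimal, self-contained sketch of that pattern follows; the names `GenerationMixinSketch` and `TinyModel` are invented for illustration and are not from the transformers codebase.

```python
# Minimal sketch of the __call__ pattern discussed in issue 8010.
# GenerationMixinSketch and TinyModel are illustrative names only.

class GenerationMixinSketch:
    """Abstract mixin: assumes the concrete subclass is callable."""

    def generate_step(self, **model_inputs):
        # self(...) dispatches to type(self).__call__, which the
        # concrete model class must provide (e.g. via tf.keras.Model).
        return self(**model_inputs)


class TinyModel(GenerationMixinSketch):
    def __call__(self, **model_inputs):
        # A real model would run a forward pass here.
        return {"logits": [len(v) for v in model_inputs.values()]}


model = TinyModel()
print(model.generate_step(input_ids=[1, 2, 3]))  # {'logits': [3]}

# Instantiating the bare mixin and calling generate_step would raise
# TypeError: 'GenerationMixinSketch' object is not callable, matching
# the traceback in the issue body above.
```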
https://api.github.com/repos/huggingface/transformers/issues/8009 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8009/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8009/comments | https://api.github.com/repos/huggingface/transformers/issues/8009/events | https://github.com/huggingface/transformers/pull/8009 | 728,527,694 | MDExOlB1bGxSZXF1ZXN0NTA5MjIwMjQ2 | 8,009 | Doc styling | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,605 | 1,603 | COLLABORATOR | null | # What does this PR do?
Adds a script that applies styling to the doc files and docstrings.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8009/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8009",
"html_url": "https://github.com/huggingface/transformers/pull/8009",
"diff_url": "https://github.com/huggingface/transformers/pull/8009.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8009.patch",
"merged_at": null
} |
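PR 8009 above only states that it adds a script applying styling to doc files and docstrings. Purely as an illustrative sketch of what such a restyler might do (this is not the actual script from the PR, and the 119-character target width is an assumed convention), a minimal docstring re-wrapper could look like:

```python
# Hypothetical sketch of a docstring restyler; not the script from
# PR 8009. The 119-character width is an assumed convention.
import textwrap


def restyle_docstring(docstring: str, width: int = 119) -> str:
    """Collapse whitespace in each paragraph and re-wrap it."""
    paragraphs = docstring.split("\n\n")
    wrapped = [textwrap.fill(" ".join(p.split()), width=width) for p in paragraphs]
    return "\n\n".join(wrapped)


print(restyle_docstring("A   very   long\ndocstring paragraph.", width=20))
```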
https://api.github.com/repos/huggingface/transformers/issues/8008 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8008/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8008/comments | https://api.github.com/repos/huggingface/transformers/issues/8008/events | https://github.com/huggingface/transformers/issues/8008 | 728,510,836 | MDU6SXNzdWU3Mjg1MTA4MzY= | 8,008 | TextDataset bug with big files | {
"login": "paulomann",
"id": 7051554,
"node_id": "MDQ6VXNlcjcwNTE1NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7051554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paulomann",
"html_url": "https://github.com/paulomann",
"followers_url": "https://api.github.com/users/paulomann/followers",
"following_url": "https://api.github.com/users/paulomann/following{/other_user}",
"gists_url": "https://api.github.com/users/paulomann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paulomann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paulomann/subscriptions",
"organizations_url": "https://api.github.com/users/paulomann/orgs",
"repos_url": "https://api.github.com/users/paulomann/repos",
"events_url": "https://api.github.com/users/paulomann/events{/privacy}",
"received_events_url": "https://api.github.com/users/paulomann/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! Could you put the whole stack trace here? I fear it might be an internal sentencepiece error, for which we'll be unable to help and you would have more luck opening an issue on the sentencepiece repo directly.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,609 | 1,609 | NONE | null | - `transformers` version: 3.0.2
- Platform: Linux-4.4.0-137-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@mfuntowicz
## Information
The model I am using (Bert, XLNet ...): XLMRobertaTokenizer
The problem arises when using:
* [ x ] my own modified script: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the following code
```Python
from transformers import XLMRobertaTokenizer, TextDataset
max_pos = 4096
tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base", model_max_length=max_pos)
tokenizer.model_max_length = max_pos
tokenizer.init_kwargs["model_max_length"] = max_pos
train_datapath = "path/to/train.raw"
train_dataset = TextDataset(
tokenizer=tokenizer,
file_path=train_datapath,
block_size=tokenizer.max_len
)
```
## Error Messages
```
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted
```
## Expected behavior
My file `train.raw` contains 349912098 words (2.3 GB). When I try with a small dataset (262703214 words, 251 MB), it works fine. I have modified this line in the TextDataset class https://github.com/huggingface/transformers/blob/a16e568f22a4d07813ba76343309ec20096115a5/src/transformers/data/datasets/language_modeling.py#L68 to understand where the problem is. It happens in the `tokenizer.tokenize(text)` part. I have changed it not to tokenize the entire text directly but to process chunks of the text each time, concatenating the results in a final list. Although memory hungry, this method works fine (and memory is not my problem).
**Note:** I have executed this script on a machine with 504 GB of RAM, and the script used approx. 36 GB when it died.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8008/timeline | completed | null | null |
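Issue 8008 above describes working around the `std::bad_alloc` by tokenizing the raw text in chunks rather than in a single `tokenizer.tokenize(text)` call. Below is a minimal sketch of that idea, not the reporter's exact patch: the chunk size and the whitespace back-off are assumptions, and `_ToyTokenizer` is a stand-in so the snippet runs without downloading a model.

```python
# Sketch of the chunked-tokenization workaround from issue 8008 (not
# the reporter's exact patch). Chunks are cut back to the previous
# whitespace so words are not split in half.

def tokenize_in_chunks(tokenizer, text, chunk_chars=1_000_000):
    tokens = []
    start, n = 0, len(text)
    while start < n:
        end = min(start + chunk_chars, n)
        if end < n:
            cut = text.rfind(" ", start, end)
            if cut > start:  # fall back to the hard cut if no space found
                end = cut
        tokens.extend(tokenizer.tokenize(text[start:end]))
        start = end
    return tokens


class _ToyTokenizer:  # stand-in so the sketch runs without a model
    def tokenize(self, text):
        return text.split()


print(len(tokenize_in_chunks(_ToyTokenizer(), "one two three " * 1000, chunk_chars=50)))
```

With a real transformers tokenizer in place of the toy one, this keeps the size of each `tokenize` call bounded by the chunk size instead of the whole file, which is what the reporter describes doing.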
https://api.github.com/repos/huggingface/transformers/issues/8007 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8007/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8007/comments | https://api.github.com/repos/huggingface/transformers/issues/8007/events | https://github.com/huggingface/transformers/pull/8007 | 728,497,162 | MDExOlB1bGxSZXF1ZXN0NTA5MTk1MzEw | 8,007 | Ci test tf super slow | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"`tests/test_tokenization_fsmt.py::FSMTTokenizationTest::test_match_encode_decode` fixed in https://github.com/huggingface/transformers/pull/8031\r\n\r\nI checked it off in your list.",
"@patrickvonplaten can you take a look at the failing TF-Longformer and TF-T5 tests?",
"> @patrickvonplaten can you take a look at the failing TF-Longformer and TF-T5 tests?\r\n\r\nWill take a look at the TF-Longformer Test :-) I tried a bit unsuccessfully to debug the `TF-T5` test for 2h a week ago and I didn't manage to get rid of `cast_bool_....` for `TFT5` at the moment. I think the hacky `cast_bool_...` function is the reason for the T5 Test Failure. Not sure if it's worth it spending a lot of time here again. :-/ I could comment out the test for now? Or @jplu @LysandreJik do you have a nice TF insight to solve it? ",
"For now I propose to comment them out. I will do a pass over it later in a couple of weeks. I planned to go through each TF model anyway.",
"I commented both TF T5 tests out ATM, same for the longformer. I think the Longformer test can be fixed, but the T5 tests cannot, at least not without introducing breaking changes.\r\n\r\nI think that we would need to have an additional layer between the `TFT5Model` and the encoder/decoder models, same as we do with every other model (the xxxMainLayer), otherwise we won't be able to use the saved model. Even with the `cast_bool_....` they're not currently usable as saved models.\r\n\r\nIt's the same issue with BART, and why those two tests are commented in BART as well imo.",
"All tests are passing now. The only error on the scheduled is because of something on the hub with one tiny models, which I'm fixing right now."
] | 1,603 | 1,604 | 1,604 | MEMBER | null | Enable the slow TF suite on GPU. Below are the current failing tests.
Investigating whether they're actually failing or if it's for some other reason.
- [x] ~FAILED tests/test_modeling_marian.py::ModelManagementTests::test_model_names~
- [x] ~FAILED tests/test_modeling_prophetnet.py::ProphetNetModelIntegrationTest::test_cnndm_inference~
- [x] ~FAILED tests/test_modeling_prophetnet.py::ProphetNetModelIntegrationTest::test_pretrained_checkpoint_hidden_states~
- [x] ~FAILED tests/test_modeling_prophetnet.py::ProphetNetModelIntegrationTest::test_question_gen_inference~
- [x] ~FAILED tests/test_modeling_roberta.py::RobertaModelIntegrationTest::test_inference_classification_head~
- [x] ~FAILED tests/test_modeling_roberta.py::RobertaModelIntegrationTest::test_inference_masked_lm~
- [x] ~FAILED tests/test_modeling_roberta.py::RobertaModelIntegrationTest::test_inference_no_head~
- [x] ~FAILED tests/test_modeling_squeezebert.py::SqueezeBertModelIntegrationTest::test_inference_classification_head~
- [x] ~FAILED tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_saved_model_with_attentions_output~
- [x] ~FAILED tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_saved_model_with_hidden_states_output~
- [x] ~FAILED tests/test_modeling_tf_camembert.py::TFCamembertModelIntegrationTest::test_output_embeds_base_model~
- [x] ~FAILED tests/test_modeling_tf_electra.py::TFElectraModelTest::test_saved_model_with_attentions_output~
- [x] ~FAILED tests/test_modeling_tf_electra.py::TFElectraModelTest::test_saved_model_with_hidden_states_output~
- [x] ~FAILED tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_saved_model_with_attentions_output~
- [x] ~FAILED tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_saved_model_with_hidden_states_output~
- [x] ~FAILED tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_saved_model_with_attentions_output~
- [x] ~FAILED tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_saved_model_with_hidden_states_output~
- [x] ~FAILED tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_saved_model_with_attentions_output~
- [x] ~FAILED tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_saved_model_with_hidden_states_output~
- [x] FAILED tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_saved_model_with_attentions_output
- [x] ~FAILED tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_saved_model_with_attentions_output~
- [x] ~FAILED tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_saved_model_with_hidden_states_output~
- [x] ~FAILED tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_model_from_pretrained~
- [x] FAILED tests/test_modeling_tf_t5.py::TFT5ModelTest::test_saved_model_with_attentions_output
- [x] FAILED tests/test_modeling_tf_t5.py::TFT5ModelTest::test_saved_model_with_hidden_states_output
- [x] ~FAILED tests/test_modeling_tf_xlm_roberta.py::TFFlaubertModelIntegrationTest::test_output_embeds_base_model~
- [x] ~FAILED tests/test_modeling_tf_xlnet.py::TFXLNetModelLanguageGenerationTest::test_lm_generate_xlnet_base_cased~
- [x] ~FAILED tests/test_modeling_transfo_xl.py::TransfoXLModelLanguageGenerationTest::test_lm_generate_transfo_xl_wt103~
- [x] ~FAILED tests/test_modeling_xlm_prophetnet.py::XLMProphetNetModelIntegrationTest::test_ntg_hidden_states~
- [x] ~FAILED tests/test_modeling_xlm_prophetnet.py::XLMProphetNetModelIntegrationTest::test_pretrained_checkpoint_hidden_states~
- [x] ~FAILED tests/test_modeling_xlm_prophetnet.py::XLMProphetNetModelIntegrationTest::test_xprophetnet_ntg_inference~
- [x] ~FAILED tests/test_modeling_xlm_roberta.py::XLMRobertaModelIntegrationTest::test_xlm_roberta_base~
- [x] ~FAILED tests/test_modeling_xlm_roberta.py::XLMRobertaModelIntegrationTest::test_xlm_roberta_large~
- [x] FAILED tests/test_pipelines.py::PipelineCommonTests::test_tf_defaults - Value...
- [x] ~FAILED tests/test_tokenization_fsmt.py::FSMTTokenizationTest::test_match_encode_decode~
And while we're at it, here are the remaining failing tests in the PyTorch slow multi-GPU suite:
- [x] FAILED tests/test_data_collator.py::DataCollatorIntegrationTest::test_nsp - K...
- [x] FAILED tests/test_modeling_common.py::ModelUtilsTest::test_model_from_pretrained
- [x] ~FAILED tests/test_modeling_rag.py::RagModelIntegrationTests::test_rag_sequence_generate_batch~
- [x] ~FAILED tests/test_modeling_rag.py::RagModelIntegrationTests::test_rag_sequence_generate_beam~
- [x] ~FAILED tests/test_modeling_rag.py::RagModelIntegrationTests::test_rag_token_generate_batch~
- [x] ~FAILED tests/test_modeling_rag.py::RagModelIntegrationTests::test_rag_token_generate_beam~
- [x] ~FAILED tests/test_modeling_rag.py::RagModelIntegrationTests::test_rag_token_inference~
- [x] FAILED tests/test_pipelines.py::MonoColumnInputTestCase::test_integration_torch_conversation
- [x] FAILED tests/test_pipelines.py::MonoColumnInputTestCase::test_integration_torch_conversation_truncated_history
- [x] FAILED tests/test_pipelines.py::DialoguePipelineTests::test_torch_conversation
- [x] FAILED tests/test_pipelines.py::PipelineCommonTests::test_pt_defaults - Value...
The RAG integration tests seem to happen because of OOM errors, deactivated them in a multi-gpu setup as the GPUs have less memory.
Status:
Done! Waiting for the green tests to merge. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8007/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8007",
"html_url": "https://github.com/huggingface/transformers/pull/8007",
"diff_url": "https://github.com/huggingface/transformers/pull/8007.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8007.patch",
"merged_at": 1604067948000
} |
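The tests listed in PR 8007 above belong to the "slow" suite. In transformers these are gated with the `@slow` decorator from `transformers.testing_utils` and only run when the environment sets `RUN_SLOW=1`. A minimal sketch follows; the test class and its body are invented for illustration.

```python
# Sketch of how slow tests are gated in transformers; the test class
# and its body are invented for illustration.
import unittest

from transformers.testing_utils import slow


class ExampleIntegrationTest(unittest.TestCase):
    @slow  # skipped unless the environment sets RUN_SLOW=1
    def test_expensive_inference(self):
        self.assertTrue(True)


if __name__ == "__main__":
    unittest.main()
```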
https://api.github.com/repos/huggingface/transformers/issues/8006 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8006/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8006/comments | https://api.github.com/repos/huggingface/transformers/issues/8006/events | https://github.com/huggingface/transformers/pull/8006 | 728,328,758 | MDExOlB1bGxSZXF1ZXN0NTA5MDU1ODM2 | 8,006 | [tokenizers] Fixing #8001 - Adding tests on tokenizers serialization | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | MEMBER | null | # What does this PR do?
Fixes #8001
Now the tokenizer classes have to send all the keyword arguments of `__init__` up to the tokenizer base class (via `super().__init__`), where they are stored in `init_kwargs` for serialized saving/reloading with `save_pretrained`/`from_pretrained`.
Adds a test on tokenizer serialization checking that all the keyword arguments of `__init__` are found in the saved `init_kwargs`, to avoid forgetting to pass some arguments up in future (and current) tokenizers.
Makes T5 tokenizer serialization more robust.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8006/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8006",
"html_url": "https://github.com/huggingface/transformers/pull/8006",
"diff_url": "https://github.com/huggingface/transformers/pull/8006.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8006.patch",
"merged_at": 1603704468000
} |
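The fix in PR 8006 above relies on each tokenizer forwarding its `__init__` keyword arguments to the base class so they end up in `init_kwargs` and survive `save_pretrained`/`from_pretrained`. A minimal sketch of the pattern follows; `BaseTokenizer` and `CustomTokenizer` are invented stand-ins, with the base class reduced to the bookkeeping relevant to serialization.

```python
# Minimal sketch of the kwargs-forwarding pattern from PR 8006.
# BaseTokenizer stands in for the real PreTrainedTokenizer; only the
# init_kwargs bookkeeping relevant to serialization is shown.
import json


class BaseTokenizer:
    def __init__(self, **kwargs):
        # Everything forwarded via super().__init__ is recorded here
        # and written out by save_pretrained for later reloading.
        self.init_kwargs = dict(kwargs)

    def save_pretrained(self, path):
        with open(path, "w") as f:
            json.dump(self.init_kwargs, f)


class CustomTokenizer(BaseTokenizer):
    def __init__(self, do_lower_case=True, unk_token="<unk>", **kwargs):
        # Forward *all* init arguments up, so they are serialized too.
        super().__init__(do_lower_case=do_lower_case, unk_token=unk_token, **kwargs)
        self.do_lower_case = do_lower_case
        self.unk_token = unk_token


tok = CustomTokenizer(do_lower_case=False)
print(tok.init_kwargs)  # {'do_lower_case': False, 'unk_token': '<unk>'}
```

The new test described above can then simply compare the signature of `__init__` against the keys of the saved `init_kwargs`, which is what catches a tokenizer that forgets to forward an argument.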
https://api.github.com/repos/huggingface/transformers/issues/8005 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8005/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8005/comments | https://api.github.com/repos/huggingface/transformers/issues/8005/events | https://github.com/huggingface/transformers/issues/8005 | 728,312,850 | MDU6SXNzdWU3MjgzMTI4NTA= | 8,005 | Differences between facebook/bart-base and facebook/bart-large? | {
"login": "leoribeiro",
"id": 839917,
"node_id": "MDQ6VXNlcjgzOTkxNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/839917?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leoribeiro",
"html_url": "https://github.com/leoribeiro",
"followers_url": "https://api.github.com/users/leoribeiro/followers",
"following_url": "https://api.github.com/users/leoribeiro/following{/other_user}",
"gists_url": "https://api.github.com/users/leoribeiro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leoribeiro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leoribeiro/subscriptions",
"organizations_url": "https://api.github.com/users/leoribeiro/orgs",
"repos_url": "https://api.github.com/users/leoribeiro/repos",
"events_url": "https://api.github.com/users/leoribeiro/events{/privacy}",
"received_events_url": "https://api.github.com/users/leoribeiro/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"If you look at\r\n\r\nhttps://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-large/config.json\r\nand \r\nhttps://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-base/config.json\r\n(how to do this for any model: go to [model hub](https://s3.amazonaws.com/models.huggingface.co/) and click see raw config file)\r\n\r\nyou will see different `task_specific_params`. These are used for fine-tuning by default so bart-large \r\nis forced to generate at least 56 tokens.\r\n\r\nThere are many ways to fix. Easiest is to comment out this line https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py#L66\r\n\r\nMore involved would be to make a local copy of the config and insert the generation parameters you want. You can pass it to finetune.py with `--config_name`.\r\n\r\n\r\nI will think about how to update bart-base and bart-large to have more reasonable task_specific_params.",
"cc @patil-suraj @stas00 @patrickvonplaten for awareness of a very sneaky bug.",
"@sshleifer , thank you very much for your reply. Indeed, I have checked those configurations. So I changed the parameters for the `generate` method to consider min_length=0:\r\n\r\n```\r\n generated_ids = self.model.generate(\r\n batch[\"input_ids\"],\r\n attention_mask=batch[\"attention_mask\"],\r\n use_cache=True,\r\n decoder_start_token_id=self.decoder_start_token_id,\r\n num_beams=self.eval_beams,\r\n no_repeat_ngram_size=0,\r\n min_length=0,\r\n max_length=self.eval_max_length,\r\n length_penalty=1.0\r\n )\r\n```\r\n\r\nI used this code for both `facebook/bart-base` and `facebook/bart-large`. And the outputs for `bart-large` are as I mentioned. I have been trying to figure out the reason in the last days without success. Maybe I'm doing some wrong, but I could not discover what it is yet.\r\n\r\nAnother point is that the generation for `bart-large` is much slower than `bart-base`. Maybe it is because the model is generating tokens until the limit (max_length).",
"How did you call `generate` to produce the outputs in your Issue Description?\r\nYour change to finetune.py will not change the config.\r\n",
"This is my `_generative_step` method: \r\n```\r\n def _generative_step(self, batch: dict) -> dict:\r\n t0 = time.time()\r\n\r\n generated_ids = self.model.generate(\r\n batch[\"input_ids\"],\r\n attention_mask=batch[\"attention_mask\"],\r\n use_cache=True,\r\n decoder_start_token_id=self.decoder_start_token_id,\r\n num_beams=self.eval_beams,\r\n no_repeat_ngram_size=0,\r\n min_length=0,\r\n max_length=self.eval_max_length,\r\n length_penalty=1.0\r\n )\r\n gen_time = (time.time() - t0) / batch[\"input_ids\"].shape[0]\r\n preds: List[str] = self.ids_to_clean_text(generated_ids)\r\n target: List[str] = self.ids_to_clean_text(batch[\"labels\"])\r\n\r\n a = self.tokenizer.batch_decode(batch[\"input_ids\"].tolist())\r\n b = self.tokenizer.batch_decode(batch[\"labels\"].tolist())\r\n c = self.tokenizer.batch_decode(generated_ids)\r\n pad_token_id = self.tokenizer.pad_token_id\r\n tgt_ids = batch[\"labels\"]\r\n if isinstance(self.model, T5ForConditionalGeneration):\r\n decoder_input_ids = self.model._shift_right(tgt_ids)\r\n else:\r\n decoder_input_ids = shift_tokens_right(tgt_ids, pad_token_id)\r\n e = self.tokenizer.batch_decode(decoder_input_ids.tolist())\r\n\r\n loss_tensors = self._step(batch)\r\n base_metrics = {name: loss for name, loss in zip(self.loss_names, loss_tensors)}\r\n rouge: Dict = self.calc_generative_metrics(preds, target)\r\n summ_len = np.mean(lmap(len, generated_ids))\r\n base_metrics.update(gen_time=gen_time, gen_len=summ_len, preds=preds, target=target, a=a, b=b, c=c, e=e, **rouge)\r\n return base_metrics\r\n```\r\n\r\n`_step` method:\r\n\r\n```\r\n def _step(self, batch: dict) -> Tuple:\r\n pad_token_id = self.tokenizer.pad_token_id\r\n src_ids, src_mask = batch[\"input_ids\"], batch[\"attention_mask\"]\r\n tgt_ids = batch[\"labels\"]\r\n if isinstance(self.model, T5ForConditionalGeneration):\r\n decoder_input_ids = self.model._shift_right(tgt_ids)\r\n else:\r\n decoder_input_ids = shift_tokens_right(tgt_ids, pad_token_id)\r\n if not self.already_saved_batch: # This would be slightly better if it only happened on rank zero\r\n batch[\"decoder_input_ids\"] = decoder_input_ids\r\n self.save_readable_batch(batch)\r\n\r\n outputs = self(src_ids, attention_mask=src_mask, decoder_input_ids=decoder_input_ids, use_cache=False)\r\n lm_logits = outputs[0]\r\n if self.hparams.label_smoothing == 0:\r\n # Same behavior as modeling_bart.py, besides ignoring pad_token_id\r\n ce_loss_fct = torch.nn.CrossEntropyLoss(ignore_index=pad_token_id)\r\n\r\n assert lm_logits.shape[-1] == self.vocab_size\r\n loss = ce_loss_fct(lm_logits.view(-1, lm_logits.shape[-1]), tgt_ids.view(-1))\r\n else:\r\n lprobs = torch.nn.functional.log_softmax(lm_logits, dim=-1)\r\n loss, nll_loss = label_smoothed_nll_loss(\r\n lprobs, tgt_ids, self.hparams.label_smoothing, ignore_index=pad_token_id\r\n )\r\n return (loss,)\r\n```\r\n\r\nThis is my validation_epoch_end:\r\n\r\n```\r\n def validation_epoch_end(self, outputs, prefix=\"val\") -> Dict:\r\n self.step_count += 1\r\n losses = {k: torch.stack([x[k] for x in outputs]).mean() for k in self.loss_names}\r\n loss = losses[\"loss\"]\r\n generative_metrics = {\r\n k: np.array([x[k] for x in outputs]).mean() for k in self.metric_names + [\"gen_time\", \"gen_len\"]\r\n }\r\n metric_val = (\r\n generative_metrics[self.val_metric] if self.val_metric in generative_metrics else losses[self.val_metric]\r\n )\r\n metric_tensor: torch.FloatTensor = torch.tensor(metric_val).type_as(loss)\r\n generative_metrics.update({k: v.item() for k, v in losses.items()})\r\n 
losses.update(generative_metrics)\r\n all_metrics = {f\"{prefix}_avg_{k}\": x for k, x in losses.items()}\r\n all_metrics[\"step_count\"] = self.step_count\r\n self.metrics[prefix].append(all_metrics) # callback writes this to self.metrics_save_path\r\n preds = flatten_list([x[\"preds\"] for x in outputs])\r\n\r\n val_outputs_folder = \"val_outputs\"\r\n os.system(\"mkdir -p \" + os.path.join(self.hparams.output_dir, val_outputs_folder))\r\n\r\n if \"preds\" in outputs[0]:\r\n tb_all = {}\r\n idx_tb = 0\r\n for output_batch in outputs:\r\n a,b,c,e = output_batch[\"a\"], output_batch[\"b\"], output_batch[\"c\"], output_batch[\"e\"]\r\n\r\n\r\n for aa,bb,ee,cc in zip(a,b,e,c):\r\n tb_all[idx_tb] = {}\r\n tb_all[idx_tb]['input_ids'] = aa\r\n tb_all[idx_tb]['labels'] = bb\r\n tb_all[idx_tb]['decoder_input_ids'] = ee\r\n tb_all[idx_tb]['generated_ids'] = cc\r\n idx_tb += 1\r\n\r\n file_debug = os.path.join(self.hparams.output_dir, val_outputs_folder,\r\n \"debug_\" +\r\n str(self.step_count) + \".json\")\r\n save_json(tb_all, file_debug)\r\n\r\n return {\r\n \"log\": all_metrics,\r\n \"preds\": preds,\r\n f\"{prefix}_loss\": loss,\r\n f\"{prefix}_{self.val_metric}\": metric_tensor,\r\n }\r\n```\r\n\r\n\r\nSo I use the `debug_k.json` file to check the outputs. Sorry for the variable names.\r\n\r\nOne example for `bart-base`:\r\n\r\n```\r\n \"1366\": {\r\n \"input_ids\": \"<s> ( report :ARG1 ( station :ARG1 ( troop :mod ( country :wiki Russia :name ( name :op1 Russia ) ) :ARG0-of ( withdraw :ARG2 ( country :quant 3 :location ( sea :wiki Baltic_Sea :name ( name :op1 Baltic :op2 Sea ) ) ) ) ) :ARG2 ( and :op1 ( state :wiki - :name ( name :op1 Jalininggele ) :location country ) :op2 ( state :wiki - :name ( name :op1 Simolingsike ) ) :op3 ( city :wiki - :name ( name :op1 Yelinia ) :location ( relative-position :op1 ( city :wiki Moscow :name ( name :op1 Moscow ) ) :quant ( distance-quantity :quant 300 :unit ( kilometer ) ) ) ) ) :mod ( respective ) ) )</s><pad><pad><pad>\",\r\n \"labels\": \"<s> It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.</s>\",\r\n \"decoder_input_ids\": \"</s><s> It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.\",\r\n \"generated_ids\": \"</s><s> Russian troops withdrawing from 3 Baltic Sea countries are reported to have respectively been stationed in the Baltic Sea states of Jalininggele,Simolingsike and Yelinia 300 kilometers away from Moscow.</s>\"\r\n },\r\n```\r\n\r\none example for `bart-large`:\r\n```\r\n \"1366\": {\r\n \"input_ids\": \"<s> ( report :ARG1 ( station :ARG1 ( troop :mod ( country :wiki Russia :name ( name :op1 Russia ) ) :ARG0-of ( withdraw :ARG2 ( country :quant 3 :location ( sea :wiki Baltic_Sea :name ( name :op1 Baltic :op2 Sea ) ) ) ) ) :ARG2 ( and :op1 ( state :wiki - :name ( name :op1 Jalininggele ) :location country ) :op2 ( state :wiki - :name ( name :op1 Simolingsike ) ) :op3 ( city :wiki - :name ( name :op1 Yelinia ) :location ( relative-position :op1 ( city :wiki Moscow :name ( name :op1 Moscow ) ) :quant ( distance-quantity :quant 300 :unit ( kilometer ) ) ) ) ) :mod ( respective ) ) 
)</s><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>\",\r\n \"labels\": \"<s> It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.</s>\",\r\n \"decoder_input_ids\": \"</s><s> It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.\",\r\n \"generated_ids\": \"</s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s></s>\"\r\n },\r\n```\r\n\r\n\r\n\r\n",
"@sshleifer I have changed the code (`3.3.1`) version in order to use the same processed decoder input for the model as the one used in transformer version `2.11.0` and it worked for both BARTs! Both BARTs (`facebook/bart-base` and `facebook/bart-large`) give good BLEU scores and generate good outputs!\r\n\r\nThe changed code:\r\n```\r\n def _step(self, batch: dict) -> Tuple:\r\n pad_token_id = self.tokenizer.pad_token_id\r\n src_ids, src_mask = batch[\"input_ids\"], batch[\"attention_mask\"]\r\n if isinstance(self.model, T5ForConditionalGeneration):\r\n tgt_ids = batch[\"labels\"]\r\n decoder_input_ids = self.model._shift_right(tgt_ids)\r\n else:\r\n #decoder_input_ids = shift_tokens_right(tgt_ids, pad_token_id)\r\n y = batch[\"labels\"]\r\n decoder_input_ids = y[:, :-1].contiguous()\r\n tgt_ids = y[:, 1:].clone()\r\n if not self.already_saved_batch: # This would be slightly better if it only happened on rank zero\r\n batch[\"decoder_input_ids\"] = decoder_input_ids\r\n self.save_readable_batch(batch)\r\n\r\n outputs = self(src_ids, attention_mask=src_mask, decoder_input_ids=decoder_input_ids, use_cache=False)\r\n lm_logits = outputs[0]\r\n if self.hparams.label_smoothing == 0:\r\n # Same behavior as modeling_bart.py, besides ignoring pad_token_id\r\n ce_loss_fct = torch.nn.CrossEntropyLoss(ignore_index=pad_token_id)\r\n\r\n assert lm_logits.shape[-1] == self.vocab_size\r\n loss = ce_loss_fct(lm_logits.view(-1, lm_logits.shape[-1]), tgt_ids.view(-1))\r\n else:\r\n lprobs = torch.nn.functional.log_softmax(lm_logits, dim=-1)\r\n loss, nll_loss = label_smoothed_nll_loss(\r\n lprobs, tgt_ids, self.hparams.label_smoothing, ignore_index=pad_token_id\r\n )\r\n return (loss,)\r\n```\r\n\r\nan example generated by `facebook/bart-base` using the new code:\r\n```\r\n \"1366\": {\r\n \"input_ids\": \"<s> ( report :ARG1 ( station :ARG1 ( troop :mod ( country :wiki Russia :name ( name :op1 Russia ) ) :ARG0-of ( withdraw :ARG2 ( country :quant 3 :location ( sea :wiki Baltic_Sea :name ( name :op1 Baltic :op2 Sea ) ) ) ) ) :ARG2 ( and :op1 ( state :wiki - :name ( name :op1 Jalininggele ) :location country ) :op2 ( state :wiki - :name ( name :op1 Simolingsike ) ) :op3 ( city :wiki - :name ( name :op1 Yelinia ) :location ( relative-position :op1 ( city :wiki Moscow :name ( name :op1 Moscow ) ) :quant ( distance-quantity :quant 300 :unit ( kilometer ) ) ) ) ) :mod ( respective ) ) )</s><pad><pad><pad>\",\r\n \"labels\": \" It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.</s>\",\r\n \"decoder_input_ids\": \"<s> It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.\",\r\n \"generated_ids\": \"</s> Russian troops withdrawing from 3 Baltic Sea countries have been reported to be stationed respectively in Jalininggele, Simolingsike and Yelinia 300 kilometers (200 miles) from Moscow.</s><pad><pad>\"\r\n },\r\n```\r\n\r\nan example generated by `facebook/bart-large` using the new code:\r\n```\r\n \"1366\": {\r\n \"input_ids\": \"<s> ( report :ARG1 ( station :ARG1 ( troop :mod ( country :wiki Russia :name ( name :op1 Russia ) ) :ARG0-of ( withdraw :ARG2 ( country :quant 3 :location ( sea :wiki Baltic_Sea :name ( name :op1 Baltic :op2 Sea ) ) 
) ) ) :ARG2 ( and :op1 ( state :wiki - :name ( name :op1 Jalininggele ) :location country ) :op2 ( state :wiki - :name ( name :op1 Simolingsike ) ) :op3 ( city :wiki - :name ( name :op1 Yelinia ) :location ( relative-position :op1 ( city :wiki Moscow :name ( name :op1 Moscow ) ) :quant ( distance-quantity :quant 300 :unit ( kilometer ) ) ) ) ) :mod ( respective ) ) )</s><pad><pad><pad>\",\r\n \"labels\": \" It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.</s>\",\r\n \"decoder_input_ids\": \"<s> It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.\",\r\n \"generated_ids\": \"</s> The Russian troop stations were respectively located in Jalininggele, Simolingsike and Yelinia located 300 kilometers (250 miles) away from Moscow in 3 countries on the Baltic Sea.</s><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>\"\r\n },\r\n```\r\n\r\nWhat I don't understand is why the previous version only works for `bart-base`, in my experiments. Another question is what is the correct/better way to use the model (to use `shift_tokens_right` or another approach?)\r\n\r\n",
"Interesting.\r\n`shift_tokens_right` has always done better on my datasets, but it's interesting that you have the opposite experience. The old code `tgt_ids = y[:, 1:].clone()` doesn't work well for tokenizers (Marian, Pegasus, T5) that don't add a `<s>` token to the beginning of the sequence, because it deletes a token.\r\n\r\nIf you can replicate the results on a small/shareable dataset I would be happy to try to understand what's going on more deeply.",
"I can see a changing behavior of `bart-large` between v3.0.2 and v3.1.0, which seems to be linked to your findings. Here's a minimal example for language generation:\r\n```py\r\nimport transformers\r\n\r\nfrom transformers import (\r\n BartTokenizer,\r\n BartForConditionalGeneration,\r\n)\r\n\r\nprint(f'** transformers v{transformers.__version__} **')\r\n\r\ntokenizer = BartTokenizer.from_pretrained('facebook/bart-large')\r\nmodel = BartForConditionalGeneration.from_pretrained('facebook/bart-large')\r\n\r\ninput_txt = 'This is <mask> sentence.'\r\nprint(f'Input: \"{input_txt}\"')\r\n\r\ninputs = tokenizer.encode(input_txt, return_tensors='pt')\r\noutputs = model.generate(inputs)\r\noutput_txt = tokenizer.decode(outputs[0], skip_special_tokens=True)\r\n\r\nprint(f'Output: \"{output_txt}\"')\r\n```\r\nFor v3.0.2, it correctly produces\r\n```bash\r\n** transformers v3.0.2 **\r\nSome weights of BartForConditionalGeneration were not initialized from the model checkpoint at facebook/bart-large and are newly initialized: ['final_logits_bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nInput: \"This is <mask> sentence.\"\r\nOutput: \"This is a partial sentence.\"\r\n```\r\nwhile v3.1.0 repeats the first token:\r\n```bash\r\n** transformers v3.1.0 **\r\nInput: \"This is <mask> sentence.\"\r\nOutput: \"ThisThis is a sentence.\"\r\n```",
"Digging a bit deeper, I can trace the issue back to this line https://github.com/huggingface/transformers/blob/4b3ee9cbc53c6cf6cee6bfae86cc2c6ec0778ee5/src/transformers/modeling_bart.py#L1114\r\nand, in turn, the default value of `force_bos_token_to_be_generated`:\r\nhttps://github.com/huggingface/transformers/blob/4b3ee9cbc53c6cf6cee6bfae86cc2c6ec0778ee5/src/transformers/configuration_bart.py#L140\r\n\r\nTo restore behavior from v3.0.2, we can change that value manually\r\n```py\r\n...\r\nconfig = BartConfig.from_pretrained('facebook/bart-large')\r\nconfig.force_bos_token_to_be_generated = True\r\n\r\ntokenizer = BartTokenizer.from_pretrained('facebook/bart-large')\r\nmodel = BartForConditionalGeneration.from_pretrained('facebook/bart-large', config=config)\r\n...\r\n```\r\nwhich gives\r\n```bash\r\n** transformers v3.1.0 **\r\nInput: \"This is <mask> sentence.\"\r\nOutput: \"This is a partial sentence.\"\r\n```\r\nand even\r\n```bash\r\n** transformers v3.4.0 **\r\nInput: \"This is <mask> sentence.\"\r\nOutput: \"This is a partial sentence.\"\r\n```\r\n@sshleifer What's the best approach to fix this? Modify bart-large's config.json?",
"Your solution is awesome, great catch!\r\n\r\nI think the right fix is to\r\n+ Update the docs\r\n+ add `task_specific_params : {'fill_mask': {'force_bos_token_to_be_generated': 'true'}` to `bart-base` and `bart-large` configs.\r\n\r\nI am hesitant to change the default because `force_bos_token_to_be_generated = False` seems to be optimal for many fine-tuning tasks.",
"Added a mask filling example to the docs in #8421 .",
":+1: Brilliant, thanks a lot @sshleifer !",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hello \r\nI'm using transformers 4.8.2 but there's still issue about same problem.\r\nI changed config.force_bos_token_to_be_generated=True.\r\n********The Result**************\r\ninput_txt = 'This is <mask> sentence.'\r\noutput_txt = 'ThisThis is a sentence.'\r\n\r\nanyone experience this??? ",
"> Hello I'm using transformers 4.8.2 but there's still issue about same problem. I changed config.force_bos_token_to_be_generated=True. ********The Result************** input_txt = 'This is sentence.' output_txt = 'ThisThis is a sentence.'\r\n> \r\n> anyone experience this???\r\n\r\nHi @yeonsookKwak I have the same issue. Would you please share the solution if any? Thanks! "
] | 1,603 | 1,641 | 1,610 | NONE | null | # ❓ Questions & Help
Are there more differences between `facebook/bart-base` and `facebook/bart-large` (other than dimensions, heads, and layers)?
## Who can help
@sshleifer @WiseDoge
## Environment info
- transformers version: 3.3.1
- Python version: 3.6.12
- PyTorch version (GPU?): 1.4.0 GPU-version
## Command:
I'm using the seq2seq/finetune.py script to finetune both BARTs.
```
python finetune.py \
--data_dir=${DATA_DIR} \
--learning_rate=3e-5 \
--num_train_epochs 5 \
--task summarization \
--model_name_or_path=${MODEL} \
--train_batch_size=4 \
--eval_batch_size=4 \
--gpus 1 \
--output_dir=$OUTPUT_DIR \
--max_source_length=256 \
--max_target_length=256 \
--val_max_target_length=256 \
--test_max_target_length=256 \
--eval_max_gen_length=256 \
--do_train --do_predict \
--eval_beams 5
```
The ${MODEL} variable can be `facebook/bart-base` or `facebook/bart-large`.
## Details
When I finetune facebook/bart-base, it works well:
```
"input_ids": " <s> ( report :ARG1 ( station :ARG1 ( troop :mod ( country :wiki Russia :name ( name :op1 Russia ) ) :ARG0-of ( withdraw :ARG2 ( country :quant 3 :location ( sea :wiki Baltic_Sea :name ( name :op1 Baltic :op2 Sea ) ) ) ) ) :ARG2 ( and :op1 ( state :wiki - :name ( name :op1 Jalininggele ) :location country ) :op2 ( state :wiki - :name ( name :op1 Simolingsike ) ) :op3 ( city :wiki - :name ( name :op1 Yelinia ) :location ( relative-position :op1 ( city :wiki Moscow :name ( name :op1 Moscow ) ) :quant ( distance-quantity :quant 300 :unit ( kilometer ) ) ) ) ) :mod ( respective ) ) )</s><pad><pad><pad>",
"labels": "<s> It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.</s>",
"decoder_input_ids": "</s><s> It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.",
"generated_ids": "</s><s> Russian troops reported to be stationed in the 3 Baltic Sea countries of Jalininggele, Simolingsike and Yelinia 300 kilometers (110 miles) from Moscow.</s><pad><pad><pad><pad><pad><pad><pad>"
```
When I finetune facebook/bart-large, it does not generate reasonable output:
```
"input_ids": "<s> ( report :ARG1 ( station :ARG1 ( troop :mod ( country :wiki Russia :name ( name :op1 Russia ) ) :ARG0-of ( withdraw :ARG2 ( country :quant 3 :location ( sea :wiki Baltic_Sea :name ( name :op1 Baltic :op2 Sea ) ) ) ) ) :ARG2 ( and :op1 ( state :wiki - :name ( name :op1 Jalininggele ) :location country ) :op2 ( state :wiki - :name ( name :op1 Simolingsike ) ) :op3 ( city :wiki - :name ( name :op1 Yelinia ) :location ( relative-position :op1 ( city :wiki Moscow :name ( name :op1 Moscow ) ) :quant ( distance-quantity :quant 300 :unit ( kilometer ) ) ) ) ) :mod ( respective ) ) )</s><pad><pad><pad>",
"labels": "<s> It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.</s>",
"decoder_input_ids": "</s><s> It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.",
"generated_ids": "</s><s><s><s><s><s><s><s><s><s><s> ... <s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s></s>"
```
I'm using the same code, but only the `facebook/bart-base` model works. In a previous transformers version, both worked, but not in this one (3.3.1).
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8005/timeline | completed | null | null |
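Issue 8005 above hinges on two ways of building decoder inputs from the labels: `shift_tokens_right`, which keeps the full sequence and rotates the final EOS token to the front, versus the older v2.11-style slicing `y[:, :-1]` / `y[:, 1:]`, which shortens both sides of the pair by one token. Here is a small self-contained sketch of the difference, using plain lists instead of tensors and toy token ids (pad id 0, with 101/102 standing in for `<s>`/`</s>`); the `shift_tokens_right` below mirrors the logic of the BART helper with tensor details omitted.

```python
# Toy sketch of the two decoder-input schemes discussed in issue 8005.
# Plain lists stand in for tensors; 0 is the pad id, 101/102 stand in
# for <s>/</s>.

PAD = 0

def shift_tokens_right(seq, pad_token_id=PAD):
    # Mirrors the BART helper: move the last non-pad token (EOS) to
    # the front and shift everything else one position to the right.
    last_non_pad = max(i for i, t in enumerate(seq) if t != pad_token_id)
    return [seq[last_non_pad]] + seq[:-1]


labels = [101, 7, 8, 9, 102, PAD]

# Scheme used by the current finetune.py: full-length pair.
decoder_input_ids = shift_tokens_right(labels)
print(decoder_input_ids, labels)  # [102, 101, 7, 8, 9, 102] [101, 7, 8, 9, 102, 0]

# Older v2.11-style slicing, restored by the reporter above: one token
# shorter on each side, so tokenizers that do not prepend <s> lose a token.
decoder_input_ids_old = labels[:-1]
target_ids_old = labels[1:]
print(decoder_input_ids_old, target_ids_old)
```

Separately, as shown later in the thread, the repeated-`<s>` generations from bart-large on mask filling can be avoided by setting `force_bos_token_to_be_generated=True` on the config, which restores the v3.0.2 behaviour.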
https://api.github.com/repos/huggingface/transformers/issues/8004 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8004/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8004/comments | https://api.github.com/repos/huggingface/transformers/issues/8004/events | https://github.com/huggingface/transformers/pull/8004 | 728,221,526 | MDExOlB1bGxSZXF1ZXN0NTA4OTY4NzQ3 | 8,004 | german medbert readme | {
"login": "smanjil",
"id": 11598535,
"node_id": "MDQ6VXNlcjExNTk4NTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/11598535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smanjil",
"html_url": "https://github.com/smanjil",
"followers_url": "https://api.github.com/users/smanjil/followers",
"following_url": "https://api.github.com/users/smanjil/following{/other_user}",
"gists_url": "https://api.github.com/users/smanjil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/smanjil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smanjil/subscriptions",
"organizations_url": "https://api.github.com/users/smanjil/orgs",
"repos_url": "https://api.github.com/users/smanjil/repos",
"events_url": "https://api.github.com/users/smanjil/events{/privacy}",
"received_events_url": "https://api.github.com/users/smanjil/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Closed in favor of #8002 "
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8004/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8004",
"html_url": "https://github.com/huggingface/transformers/pull/8004",
"diff_url": "https://github.com/huggingface/transformers/pull/8004.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8004.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8003 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8003/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8003/comments | https://api.github.com/repos/huggingface/transformers/issues/8003/events | https://github.com/huggingface/transformers/pull/8003 | 728,176,353 | MDExOlB1bGxSZXF1ZXN0NTA4OTMxMzkw | 8,003 | Create model card for bert-italian-cased-finetuned-pos | {
"login": "sachaarbonel",
"id": 18029834,
"node_id": "MDQ6VXNlcjE4MDI5ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/18029834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachaarbonel",
"html_url": "https://github.com/sachaarbonel",
"followers_url": "https://api.github.com/users/sachaarbonel/followers",
"following_url": "https://api.github.com/users/sachaarbonel/following{/other_user}",
"gists_url": "https://api.github.com/users/sachaarbonel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachaarbonel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachaarbonel/subscriptions",
"organizations_url": "https://api.github.com/users/sachaarbonel/orgs",
"repos_url": "https://api.github.com/users/sachaarbonel/repos",
"events_url": "https://api.github.com/users/sachaarbonel/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachaarbonel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8003/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8003",
"html_url": "https://github.com/huggingface/transformers/pull/8003",
"diff_url": "https://github.com/huggingface/transformers/pull/8003.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8003.patch",
"merged_at": 1603465086000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/8002 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8002/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8002/comments | https://api.github.com/repos/huggingface/transformers/issues/8002/events | https://github.com/huggingface/transformers/pull/8002 | 728,092,275 | MDExOlB1bGxSZXF1ZXN0NTA4ODYyNzE4 | 8,002 | Create README.md | {
"login": "smanjil",
"id": 11598535,
"node_id": "MDQ6VXNlcjExNTk4NTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/11598535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smanjil",
"html_url": "https://github.com/smanjil",
"followers_url": "https://api.github.com/users/smanjil/followers",
"following_url": "https://api.github.com/users/smanjil/following{/other_user}",
"gists_url": "https://api.github.com/users/smanjil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/smanjil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smanjil/subscriptions",
"organizations_url": "https://api.github.com/users/smanjil/orgs",
"repos_url": "https://api.github.com/users/smanjil/repos",
"events_url": "https://api.github.com/users/smanjil/events{/privacy}",
"received_events_url": "https://api.github.com/users/smanjil/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,603 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8002/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8002",
"html_url": "https://github.com/huggingface/transformers/pull/8002",
"diff_url": "https://github.com/huggingface/transformers/pull/8002.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8002.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8001 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8001/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8001/comments | https://api.github.com/repos/huggingface/transformers/issues/8001/events | https://github.com/huggingface/transformers/issues/8001 | 728,086,757 | MDU6SXNzdWU3MjgwODY3NTc= | 8,001 | do_lower_case not saved/loaded correctly for Tokenizers | {
"login": "tholor",
"id": 1563902,
"node_id": "MDQ6VXNlcjE1NjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1563902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tholor",
"html_url": "https://github.com/tholor",
"followers_url": "https://api.github.com/users/tholor/followers",
"following_url": "https://api.github.com/users/tholor/following{/other_user}",
"gists_url": "https://api.github.com/users/tholor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tholor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tholor/subscriptions",
"organizations_url": "https://api.github.com/users/tholor/orgs",
"repos_url": "https://api.github.com/users/tholor/repos",
"events_url": "https://api.github.com/users/tholor/events{/privacy}",
"received_events_url": "https://api.github.com/users/tholor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oh! I'll take a look, thanks for the report @tholor ",
"Thanks for the fast fix @thomwolf ! Very much appreciated!"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-5.4.0-52-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@mfuntowicz
## Information
The `do_lower_case` property of BertTokenizer is not correctly restored after saving / loading.
## To reproduce
```python
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
print(tokenizer.do_lower_case)
tokenizer.save_pretrained("debug_tokenizer")
tokenizer_loaded = BertTokenizer.from_pretrained("debug_tokenizer")
print(tokenizer_loaded.do_lower_case)
```
returns
```
False
True
```
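A hedged workaround until this is fixed (not part of the original report): pass the flag explicitly when reloading, since `from_pretrained` forwards keyword arguments to the tokenizer constructor:
```python
# Workaround sketch - assumes only the flag is lost, not the saved vocab files.
from transformers import BertTokenizer

tokenizer_loaded = BertTokenizer.from_pretrained("debug_tokenizer", do_lower_case=False)
print(tokenizer_loaded.do_lower_case)  # False
```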
## Expected behavior
Same object attributes after saving / loading | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8001/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8001/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8000 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8000/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8000/comments | https://api.github.com/repos/huggingface/transformers/issues/8000/events | https://github.com/huggingface/transformers/issues/8000 | 728,031,905 | MDU6SXNzdWU3MjgwMzE5MDU= | 8,000 | How to load tokenizer for models without vocab.txt? | {
"login": "havetry",
"id": 49902228,
"node_id": "MDQ6VXNlcjQ5OTAyMjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/49902228?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/havetry",
"html_url": "https://github.com/havetry",
"followers_url": "https://api.github.com/users/havetry/followers",
"following_url": "https://api.github.com/users/havetry/following{/other_user}",
"gists_url": "https://api.github.com/users/havetry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/havetry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/havetry/subscriptions",
"organizations_url": "https://api.github.com/users/havetry/orgs",
"repos_url": "https://api.github.com/users/havetry/repos",
"events_url": "https://api.github.com/users/havetry/events{/privacy}",
"received_events_url": "https://api.github.com/users/havetry/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello! In order to use the xlm-roberta-large model, why don't you use the `from_pretrained` method?\r\n\r\n```py\r\nfrom transformers import XLMRobertaModel\r\n\r\nmodel = XLMRobertaModel.from_pretrained(\"xlm-roberta-large\")\r\n```",
"> Hello! In order to use the xlm-roberta-large model, why don't you use the `from_pretrained` method?\r\n> \r\n> ```python\r\n> from transformers import XLMRobertaModel\r\n> \r\n> model = XLMRobertaModel.from_pretrained(\"xlm-roberta-large\")\r\n> ```\r\nThanks for your answer!\r\nI tried do it like what you said, but I couldn't linked the URL to download the model, so I try to download the model、cofig、tokenizer to local and load it. \r\nso, The question what I said was I have not find the vocab.txt to generate the tokenizer.",
"You don't need the URL to download the model, you can just use the identifier as its shown in my message. Or is there a reason why you want to have the URLs? ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,609 | 1,609 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
I want to use the xlm-roberta-large model, but https://huggingface.co/ only provides a file named "xlm-roberta-large-tokenizer.json" and no "vocab.txt", so how can I use the `XLMRobertaTokenizer` class to load the file "xlm-roberta-large-tokenizer.json"?
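A minimal sketch of what I expected to work (assuming the model identifier resolves and downloads are possible from my machine):
```python
# Sketch - XLM-R ships a SentencePiece model instead of a vocab.txt,
# so `from_pretrained` with the identifier should fetch everything needed.
from transformers import XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-large")
print(tokenizer.tokenize("Hello world"))
```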
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8000/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7999 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7999/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7999/comments | https://api.github.com/repos/huggingface/transformers/issues/7999/events | https://github.com/huggingface/transformers/pull/7999 | 727,957,464 | MDExOlB1bGxSZXF1ZXN0NTA4NzU0Njkx | 7,999 | Add model cards for DynaBERT | {
"login": "mazicwong",
"id": 17029801,
"node_id": "MDQ6VXNlcjE3MDI5ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/17029801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mazicwong",
"html_url": "https://github.com/mazicwong",
"followers_url": "https://api.github.com/users/mazicwong/followers",
"following_url": "https://api.github.com/users/mazicwong/following{/other_user}",
"gists_url": "https://api.github.com/users/mazicwong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mazicwong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mazicwong/subscriptions",
"organizations_url": "https://api.github.com/users/mazicwong/orgs",
"repos_url": "https://api.github.com/users/mazicwong/repos",
"events_url": "https://api.github.com/users/mazicwong/events{/privacy}",
"received_events_url": "https://api.github.com/users/mazicwong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Add model cards for DynaBERT. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7999/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7999",
"html_url": "https://github.com/huggingface/transformers/pull/7999",
"diff_url": "https://github.com/huggingface/transformers/pull/7999.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7999.patch",
"merged_at": 1603464834000
} |
https://api.github.com/repos/huggingface/transformers/issues/7998 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7998/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7998/comments | https://api.github.com/repos/huggingface/transformers/issues/7998/events | https://github.com/huggingface/transformers/pull/7998 | 727,916,290 | MDExOlB1bGxSZXF1ZXN0NTA4NzIwMzA1 | 7,998 | update version for scipy | {
"login": "suliuzh",
"id": 27858725,
"node_id": "MDQ6VXNlcjI3ODU4NzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/27858725?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suliuzh",
"html_url": "https://github.com/suliuzh",
"followers_url": "https://api.github.com/users/suliuzh/followers",
"following_url": "https://api.github.com/users/suliuzh/following{/other_user}",
"gists_url": "https://api.github.com/users/suliuzh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suliuzh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suliuzh/subscriptions",
"organizations_url": "https://api.github.com/users/suliuzh/orgs",
"repos_url": "https://api.github.com/users/suliuzh/repos",
"events_url": "https://api.github.com/users/suliuzh/events{/privacy}",
"received_events_url": "https://api.github.com/users/suliuzh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Let's wait for Victor to answer on that issue before merging.",
"looks good!"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Updates the version requirement for scipy in `examples/distillation/requirements.txt`.
Fixes https://github.com/huggingface/transformers/issues/7967
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7998/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7998",
"html_url": "https://github.com/huggingface/transformers/pull/7998",
"diff_url": "https://github.com/huggingface/transformers/pull/7998.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7998.patch",
"merged_at": 1603717017000
} |
https://api.github.com/repos/huggingface/transformers/issues/7997 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7997/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7997/comments | https://api.github.com/repos/huggingface/transformers/issues/7997/events | https://github.com/huggingface/transformers/pull/7997 | 727,861,065 | MDExOlB1bGxSZXF1ZXN0NTA4NjcyMTEw | 7,997 | Create README.md | {
"login": "mazicwong",
"id": 17029801,
"node_id": "MDQ6VXNlcjE3MDI5ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/17029801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mazicwong",
"html_url": "https://github.com/mazicwong",
"followers_url": "https://api.github.com/users/mazicwong/followers",
"following_url": "https://api.github.com/users/mazicwong/following{/other_user}",
"gists_url": "https://api.github.com/users/mazicwong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mazicwong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mazicwong/subscriptions",
"organizations_url": "https://api.github.com/users/mazicwong/orgs",
"repos_url": "https://api.github.com/users/mazicwong/repos",
"events_url": "https://api.github.com/users/mazicwong/events{/privacy}",
"received_events_url": "https://api.github.com/users/mazicwong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,603 | 1,604 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Add a model card for DynaBERT on huggingface.co.
## Before submitting
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7997/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7997",
"html_url": "https://github.com/huggingface/transformers/pull/7997",
"diff_url": "https://github.com/huggingface/transformers/pull/7997.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7997.patch",
"merged_at": 1603464818000
} |
https://api.github.com/repos/huggingface/transformers/issues/7996 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7996/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7996/comments | https://api.github.com/repos/huggingface/transformers/issues/7996/events | https://github.com/huggingface/transformers/pull/7996 | 727,840,507 | MDExOlB1bGxSZXF1ZXN0NTA4NjU2MTY1 | 7,996 | Added model cards for Tagalog ELECTRA models | {
"login": "jcblaisecruz02",
"id": 24757547,
"node_id": "MDQ6VXNlcjI0NzU3NTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/24757547?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jcblaisecruz02",
"html_url": "https://github.com/jcblaisecruz02",
"followers_url": "https://api.github.com/users/jcblaisecruz02/followers",
"following_url": "https://api.github.com/users/jcblaisecruz02/following{/other_user}",
"gists_url": "https://api.github.com/users/jcblaisecruz02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jcblaisecruz02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcblaisecruz02/subscriptions",
"organizations_url": "https://api.github.com/users/jcblaisecruz02/orgs",
"repos_url": "https://api.github.com/users/jcblaisecruz02/repos",
"events_url": "https://api.github.com/users/jcblaisecruz02/events{/privacy}",
"received_events_url": "https://api.github.com/users/jcblaisecruz02/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"@jcblaisecruz02 looks like your config.json files do not contain a `architectures` field nor a `model_type`, so your models might be incorrectly categorized – any way you could add those? Thank you!",
"> @jcblaisecruz02 looks like your config.json files do not contain a `architectures` field nor a `model_type`, so your models might be incorrectly categorized – any way you could add those? Thank you!\r\n\r\nAh gotcha! I'll add those. Thanks!",
"(easiest way should be to just call `.save_pretrained` again)"
] | 1,603 | 1,603 | 1,603 | NONE | null | # What does this PR do?
Added model cards for eight ELECTRA Tagalog models (four generators and four discriminators). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7996/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7996",
"html_url": "https://github.com/huggingface/transformers/pull/7996",
"diff_url": "https://github.com/huggingface/transformers/pull/7996.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7996.patch",
"merged_at": 1603464742000
} |
https://api.github.com/repos/huggingface/transformers/issues/7995 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7995/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7995/comments | https://api.github.com/repos/huggingface/transformers/issues/7995/events | https://github.com/huggingface/transformers/pull/7995 | 727,838,660 | MDExOlB1bGxSZXF1ZXN0NTA4NjU0NzQ5 | 7,995 | [CI] generate separate report files as artifacts | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This looks good but I only see `test_output.txt` in the artifacts, for some reason?",
"you must be looking at the wrong job? As I said I only did it for one job at the moment - this one:\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/14359/workflows/b38c8e8c-2867-4366-a907-a202da9bc9ee/jobs/104410/steps\r\n```\r\n ~/transformers/output.txt\r\n ~/transformers/tests_durations.txt\r\n ~/transformers/tests_failures.txt\r\n ~/transformers/tests_passes.txt\r\n ~/transformers/tests_short_summary.txt\r\n ~/transformers/tests_stats.txt\r\n```",
"Ah my bad, I miscklicked!",
"@sshleifer or @sgugger - I configured github artifacts in `self-push.yaml` of this PR - would one of you be able to start that job for me as I have no perms to do so. Thank you very much!\r\n\r\nI hope I did it right, I added:\r\n```\r\n - name: test suite reports artifacts\r\n uses: actions/upload-artifact@v2\r\n with:\r\n name: tests_results\r\n path: tests_*\r\n```\r\nI'm not sure whether this should be `path: ~/transformers/tests_*` like it was on circle_ci config - it should pick it up from the cwd.\r\n\r\nI currently added it only to `run_tests_torch_and_tf_gpu` - so in theory it should upload the reports to the workflow results.\r\n\r\nFor reference, the information on this setup is at this 2 pages:\r\n* https://docs.github.com/en/free-pro-team@latest/actions/guides/storing-workflow-data-as-artifacts\r\n* https://github.com/actions/upload-artifact#usage\r\n",
"I can't figure out how to run a github actions workflow against a branch. It looks good enough that we I'm happy to just acknowledge that this could break on merge, in which case we'd send a follow up PR.",
"Thank you for trying, @sshleifer \r\n\r\nAh, it's not finished yet, merge-wise - it's very rough on edges.\r\n* I just want to figure out how to make the results available on github actions in parallel with\r\n* waiting on you guys to hear what reports do you want and which not before finalizing this.\r\n\r\nCan you suggest a different way of testing this? This was your recommendation in first place - to test it on a PR branch - except I can't test it since I don't have permissions to access the runners. Surely there must be a way of testing this?\r\n\r\nAlternatively, we could go as simple as creating a new github workflow job that simply runs a job of `echo test > tests_1.txt; echo test2 > tests_2.txt` and then uploads `tests_*` as an artifact and checking that it is what you want. It should just work, since the docs suggest that as an example. Once we know it's working then the rest is easy.\r\n\r\nEarlier you were talking about some possible problems with this - something about the job being always successful, I can't find that comment - but I am pretty sure there is no such issue with the approach I implemented - where `pytest` generates all the report files and we don't need to do anything about its log parsing.",
"> waiting on you guys to hear what reports do you want and which not before finalizing this.\r\n\r\nDon't wait, just make a sensible choice that's easy to change. Lean towards fewer reports.\r\n\r\n> Can you suggest a different way of testing this? \r\n\r\nI don't know a good way of testing github actions. [act](https://github.com/nektos/act) looks promising, but I've never used it. The issue is not permissions it is that github workflows, afaict, cannot be run against arbitrary branches. There is a \"rerun all jobs\" button, but it will just rerun on master. Would be incredibly valuable if you figured out how to test github actions locally.\r\n\r\nHere is everything I can see for self-push at https://github.com/huggingface/transformers/actions/runs/326336555/workflow\r\n\r\n",
"I agree with Sam that we can merge to test and iterate if the reports look wrong (as soon as we're sure that the circleCI part is good to go, which we can test on this PR). From what I understand, the PR adds a new job, so it does not break the existing ones/reports.",
"I will work on completing this and we can put it in for one circle-ci and one github workflow and see how it goes - thank you for your feedback, @sshleifer and @sgugger ",
"This is good to merge."
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | This PR solves https://github.com/huggingface/transformers/issues/7887 by producing easier-to-use reports on CIs.
* [x] adds an optional `--make_reports=id` flag to `pytest`, e.g. `--make_reports=examples`. It then uses that id to generate `report_{id}_{reports}.txt` - this was needed since some jobs, like the scheduled jobs, have multiple pytest runs, so a unique string is required. Without this new flag everything remains as is - i.e. no reports get generated
* [x] the generated reports are all saved under `reports` to simplify the upload and are at the moment (assuming `id` was `tests`):
- report_tests_durations.txt
- report_tests_errors.txt
- report_tests_failures.txt
- report_tests_passes.txt
- report_tests_short_summary.txt
- report_tests_stats.txt
- report_tests_warnings.txt
We no longer need any `pytest` flags to generate these - e.g. no need for `-rA` or `--durations=` - they are all done internally.
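For illustration, a hedged sketch of invoking the new flag (programmatic form shown; the CLI equivalent is in the comment, and the flag/paths are as described above):
```python
# Hedged sketch - drives the new reporting flag via pytest's Python API.
# Equivalent CLI: pytest tests --make_reports=tests
import pytest

exit_code = pytest.main(["tests", "--make_reports=tests"])
# the report files then land under reports/, e.g. reports/report_tests_failures.txt
```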
The code itself is a bit of a hack that borrows a lot of `pytest` internals - but that's a start - I will see if I can find a public API to accomplish the same later if this new functionality catches on. Actually, it's pretty safe since it calls the same report functions `pytest` uses, so it's unlikely to break.
* [x] added the reporting to:
- CircleCI `run_examples_torch` and `run_tests_torch` jobs
- GitHub workflow `run_all_tests_torch_and_tf_gpu` job. (this one generates 3 (!) groups of reports)
Once these are tested on `master` and the results are satisfactory, I will add this new functionality to the rest of the jobs.
**This is what you want to review**:
- the latest [report](https://app.circleci.com/pipelines/github/huggingface/transformers/14586/workflows/21a114bc-c65b-4b62-b747-a0056923479a/jobs/106843)
- the corresponding [artifacts](https://app.circleci.com/pipelines/github/huggingface/transformers/14586/workflows/21a114bc-c65b-4b62-b747-a0056923479a/jobs/106843/artifacts)
Fixes: #7887
@sshleifer, @sgugger, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7995/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7995",
"html_url": "https://github.com/huggingface/transformers/pull/7995",
"diff_url": "https://github.com/huggingface/transformers/pull/7995.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7995.patch",
"merged_at": 1603805108000
} |
https://api.github.com/repos/huggingface/transformers/issues/7994 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7994/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7994/comments | https://api.github.com/repos/huggingface/transformers/issues/7994/events | https://github.com/huggingface/transformers/issues/7994 | 727,804,903 | MDU6SXNzdWU3Mjc4MDQ5MDM= | 7,994 | BertTokenizer's add_token won't add token | {
"login": "HenryPaik1",
"id": 42961175,
"node_id": "MDQ6VXNlcjQyOTYxMTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/42961175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HenryPaik1",
"html_url": "https://github.com/HenryPaik1",
"followers_url": "https://api.github.com/users/HenryPaik1/followers",
"following_url": "https://api.github.com/users/HenryPaik1/following{/other_user}",
"gists_url": "https://api.github.com/users/HenryPaik1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HenryPaik1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HenryPaik1/subscriptions",
"organizations_url": "https://api.github.com/users/HenryPaik1/orgs",
"repos_url": "https://api.github.com/users/HenryPaik1/repos",
"events_url": "https://api.github.com/users/HenryPaik1/events{/privacy}",
"received_events_url": "https://api.github.com/users/HenryPaik1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It seems not work until I save the new tokenizer. I close the issue.\r\n```\r\ntokenizer.save_pretrained('/path/to/tokenizer')\r\ntokenizer = BertTokenizer.from_pretrained('/path/to/tokenizer')\r\n```",
"looks like tokenizer.vocab_size does not update after add tokens. but len(tokenizer) shows correct number ",
"Yes, the `vocab_size` only contains the information relative to the initial vocabulary. You can find the added tokens either in `tokenizer.get_added_vocab()`, which returns the dictionary, or `tokenizer.added_tokens_encoder`, which returns the amount of added tokens."
] | 1,603 | 1,603 | 1,603 | NONE | null | `add_tokens` actually won't add the tokens. Please refer to the code below:
```python
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
tokenizer.vocab_size
>>30522
tokenizer.add_tokens(new_tokens=['[SUBJ]', '[OBJ]'], special_tokens=True)
>>2
tokenizer.vocab['[OBJ]']
>> KeyError: '[OBJ]'
tokenizer.vocab_size
>>30522 # not changed
tokenizer.tokenize('[OBJ]')
>>['[', 'ob', '##j', ']'] # expected: '[OBJ]'
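# (illustration added for clarity - not part of the original report)
# the added tokens are stored in a separate mapping, hence:
len(tokenizer)
>>30524
tokenizer.get_added_vocab()
>>{'[SUBJ]': 30522, '[OBJ]': 30523}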
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7994/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7993 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7993/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7993/comments | https://api.github.com/repos/huggingface/transformers/issues/7993/events | https://github.com/huggingface/transformers/pull/7993 | 727,731,547 | MDExOlB1bGxSZXF1ZXN0NTA4NTY3NjQ1 | 7,993 | [docs] [testing] distributed training | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"One followup is to update the `test_trainer_distributed` to work with pytest. Then ideally, if we could have one command to run all those tests, that would be awesome (maybe we can use a pytest marker to mark all distributed-specific tests so it's easy to select them all?)",
"I will port `test_trainer_distributed` - thank you for flagging that, @sgugger \r\nTracking it here: https://github.com/huggingface/transformers/issues/8058"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | We figured out how to support distributed training with `pytest`; this is a preliminary doc snippet to help those in need find the current implementation. I'm sure it will evolve as we have more tests with varying needs, but for now that's all we have.
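Riffing on the marker idea floated in the review comments, a hedged sketch (the marker name is an assumption, not something this PR adds, and it would need registering in `setup.cfg` to silence warnings):
```python
# Sketch only: mark distributed-only tests so `pytest -m distributed` can select them.
import pytest

@pytest.mark.distributed  # hypothetical marker name
def test_trainer_distributed_smoke():
    assert True
```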
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7993/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7993",
"html_url": "https://github.com/huggingface/transformers/pull/7993",
"diff_url": "https://github.com/huggingface/transformers/pull/7993.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7993.patch",
"merged_at": 1603714506000
} |
https://api.github.com/repos/huggingface/transformers/issues/7992 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7992/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7992/comments | https://api.github.com/repos/huggingface/transformers/issues/7992/events | https://github.com/huggingface/transformers/pull/7992 | 727,728,185 | MDExOlB1bGxSZXF1ZXN0NTA4NTY0ODMz | 7,992 | update zero shot default widget example | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Just changing BART's zero-shot widget example. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7992/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7992",
"html_url": "https://github.com/huggingface/transformers/pull/7992",
"diff_url": "https://github.com/huggingface/transformers/pull/7992.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7992.patch",
"merged_at": 1603401581000
} |
https://api.github.com/repos/huggingface/transformers/issues/7991 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7991/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7991/comments | https://api.github.com/repos/huggingface/transformers/issues/7991/events | https://github.com/huggingface/transformers/pull/7991 | 727,717,308 | MDExOlB1bGxSZXF1ZXN0NTA4NTU1Nzc5 | 7,991 | [Reformer] remove reformer pad_token_id | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | MEMBER | null | # What does this PR do?
The `crime-and-punishment` tokenizer actually does not have a `pad_token_id` - check with this [notebook](https://colab.research.google.com/github/google/trax/blob/master/trax/models/reformer/text_generation.ipynb#scrollTo=iDgvKNa_DDIq). Since this is our only tokenizer for Reformer, we should remove the `pad_token` completely from the Reformer tokenizer script (otherwise `tokenizer.pad_token_id` gets an id >= `tokenizer.max_len`).
Since `crime-and-punishment` runs on causal attention, any token can be set to the padding token during inference.
Thus, before padding, one should do `tokenizer.pad_token = tokenizer.eos_token`.
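Concretely, a minimal sketch of that usage (checkpoint name taken from the linked notebook's model):
```python
# Minimal padding sketch - reuse EOS as PAD, which is safe under causal attention.
from transformers import ReformerTokenizer

tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
tokenizer.pad_token = tokenizer.eos_token
batch = tokenizer(["A sentence.", "A much longer second sentence."], padding=True)
```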
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #7929
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7991/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7991/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7991",
"html_url": "https://github.com/huggingface/transformers/pull/7991",
"diff_url": "https://github.com/huggingface/transformers/pull/7991.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7991.patch",
"merged_at": 1603463355000
} |
https://api.github.com/repos/huggingface/transformers/issues/7990 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7990/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7990/comments | https://api.github.com/repos/huggingface/transformers/issues/7990/events | https://github.com/huggingface/transformers/pull/7990 | 727,693,335 | MDExOlB1bGxSZXF1ZXN0NTA4NTM1OTIw | 7,990 | Handling longformer model_type | {
"login": "ethanjperez",
"id": 6402205,
"node_id": "MDQ6VXNlcjY0MDIyMDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6402205?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ethanjperez",
"html_url": "https://github.com/ethanjperez",
"followers_url": "https://api.github.com/users/ethanjperez/followers",
"following_url": "https://api.github.com/users/ethanjperez/following{/other_user}",
"gists_url": "https://api.github.com/users/ethanjperez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ethanjperez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ethanjperez/subscriptions",
"organizations_url": "https://api.github.com/users/ethanjperez/orgs",
"repos_url": "https://api.github.com/users/ethanjperez/repos",
"events_url": "https://api.github.com/users/ethanjperez/events{/privacy}",
"received_events_url": "https://api.github.com/users/ethanjperez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @patil-suraj @patrickvonplaten @xixiaoyao"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | Updating the run_squad training script to handle the "longformer" `model_type`. The longformer is trained in the same way as RoBERTa, so I've added the "longformer" `model_type` (that's the right Hugging Face name for the Longformer model, right?) everywhere there was a "roberta" `model_type` reference. The longformer (like RoBERTa) doesn't use `token_type_ids` (as I understand from looking at the [longformer notebook](https://github.com/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb)), which is what gets updated after this change.
This fix might be related to [this issue](https://github.com/huggingface/transformers/issues/7249) with SQuAD training when using run_squad.py.
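Roughly, the pattern being extended looks like this simplified sketch (a hedged illustration, not the literal diff; the dummy tensors are placeholders):
```python
# Simplified sketch of the run_squad.py pattern this PR extends.
model_type = "longformer"
inputs = {
    "input_ids": [[0, 42, 2]],
    "attention_mask": [[1, 1, 1]],
    "token_type_ids": [[0, 0, 0]],
}
if model_type in ["xlm", "roberta", "distilbert", "camembert", "bart", "longformer"]:
    del inputs["token_type_ids"]  # these models were not pretrained with segment ids
```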
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7990/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7990",
"html_url": "https://github.com/huggingface/transformers/pull/7990",
"diff_url": "https://github.com/huggingface/transformers/pull/7990.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7990.patch",
"merged_at": 1603463647000
} |
https://api.github.com/repos/huggingface/transformers/issues/7989 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7989/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7989/comments | https://api.github.com/repos/huggingface/transformers/issues/7989/events | https://github.com/huggingface/transformers/pull/7989 | 727,681,964 | MDExOlB1bGxSZXF1ZXN0NTA4NTI2NTMy | 7,989 | [gh ci] less output ( --durations=50) | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you for the heads up - I will be working on all these related issues shortly - too much data indeed, but not just that.",
"I'd say remove them complete for now and also -rA - I need to experiment and see how to make this data available w/o making logs unusable. ",
"More context on github actions:\r\n\r\n\r\nif we can somehow catch the return value of \r\nbash\r\n```\r\nx= python -m pytest -n 1 --dist=loadfile -s examples --durations=50 | tee test_output.txt\r\nsave test_output.txt # always succeeds\r\nsys.exit(x)\r\n```\r\nor something like that, we can make huge progress on the github actions issue and start making artifacts files.\r\n\r\nThe reason artifacts files broke was that even in the below code, even if line 1 raises an error, line 2 succeeds so github actions thinks the job succeeded\r\n```bash\r\npython -m pytest -n 1 --dist=loadfile -s examples --durations=50 | tee test_output.txt\r\nsave test_output.txt # always succeeds\r\n```",
"oh, unless I'm missing something, we don't need any of the workarounds. \r\n\r\nI already have the first requested component (failures) working, see: https://github.com/huggingface/transformers/pull/7995\r\n\r\nCheck out the resulting artifacts:\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/14354/workflows/1ccd616e-218f-4ae1-b413-91d2faa0e942/jobs/104363/artifacts\r\n\r\nthis is what we want right?\r\n\r\n`pytest` provides hooks for doing this kind of work, so it's just figuring out which hooks to call.\r\n\r\nIn your example instead of `x = cmd` what you need to save is `$?` which is the exit status of the command.\r\n",
"That's great, but note that this is all much easier in circleci. My ask is to make it work in github actions.\r\nThe failures are already pretty easy to find in circleci.\r\n\r\n\r\n2) you mean\r\n```bash\r\nx= python -m pytest -n 1 --dist=loadfile -s examples --durations=50 | tee test_output.txt\r\nsave test_output.txt # always succeeds\r\nsys.exit($x)\r\n```\r\n?",
"`test_failures.txt` is really nice!",
"Ah, good point. let me see what other handy reports I can squeese in circle-ci and then I will move to github actions.\r\n\r\nI'm not following your question yet, let me get to github actions and then it'll probably make sense, but yes I'm referring to that example when I said:\r\n> In your example instead of `x = cmd` what you need to save is `$?` which is the exit status of the command.\r\n\r\ni.e. `sys.exit($?)` but you must save it right away upon `pytest` completion, since the next command will overwrite it."
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | Way too much output in [this](https://github.com/huggingface/transformers/pull/7989)
This will make it slightly better.
cc @stas00
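For reference, a minimal sketch of the log-capture problem discussed in the comments above (illustrative only -- the command line and file names are assumptions, not the actual CI config): run pytest with the trimmed `--durations=50` report, keep the full log as an artifact, and still propagate the real exit status so the job fails when tests do.
```python
# Sketch: capture the pytest log for an artifact without losing the exit code.
import subprocess
import sys

result = subprocess.run(
    ["python", "-m", "pytest", "-n", "1", "--dist=loadfile", "examples", "--durations=50"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    text=True,
)
with open("test_output.txt", "w") as f:
    f.write(result.stdout)
sys.exit(result.returncode)  # unlike `cmd | tee`, this preserves pytest's status
```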
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7989/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7989",
"html_url": "https://github.com/huggingface/transformers/pull/7989",
"diff_url": "https://github.com/huggingface/transformers/pull/7989.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7989.patch",
"merged_at": 1603397416000
} |
https://api.github.com/repos/huggingface/transformers/issues/7988 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7988/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7988/comments | https://api.github.com/repos/huggingface/transformers/issues/7988/events | https://github.com/huggingface/transformers/issues/7988 | 727,668,194 | MDU6SXNzdWU3Mjc2NjgxOTQ= | 7,988 | [Good first issue] Documentation links in older docs versions | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
},
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hi, has anyone picked this up? I can give it a go in the coming week or two when I have some let off from releases at work if no-one is doing this. ",
"Nobody has picked this up yet, would love to see such a contribution!",
"Awesome, have our last release till year end freeze this week at work. I'll get in there afterwards. Would like to learn about some of the libraries involved in this project this seems like a good intro. "
] | 1,603 | 1,609 | 1,609 | MEMBER | null | # 🚀 Feature request
This is a documentation request in order to make it easier to find corresponding examples in the documentation.
Good first issue if you want to get acquainted with the docs and how to build docs using Sphinx!
## Current issue
Here's the issue: currently, if one goes to an older documentation version to check the "examples" page, for example, [v2.6.0](https://huggingface.co/transformers/v2.6.0/examples.html), all links point towards the `master` branch.
For example, the link towards `run_tf_glue.py` is the following: https://github.com/huggingface/transformers/blob/master/examples/run_tf_glue.py
As this points towards the `master` branch, it is prone to breaking, since files can (and probably will) be moved around as versions come out. That is already the case for this example: the `run_tf_glue.py` script is no longer in `examples/`, but in `examples/text-classification/`.
I think we need a way to ensure that all links point toward their appropriate version, and the easiest would be to point to a given tag. Since we're looking at the version `v2.6.0`, it makes sense to point towards the tag v2.6.0: https://github.com/huggingface/transformers/blob/v2.6.0/examples/run_tf_glue.py
This way links get frozen in time and redirect to actual files corresponding to their description and behaviour as stated in the docs.
## Resolution
I believe the easiest change would be to use sphinx variables in order to do this. Probably either [rst_epilog](https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-rst_epilog) or [rst_prolog](https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-rst_prolog) could be useful here.
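For illustration, a minimal `conf.py` sketch (names are placeholders and the final implementation may differ). Substitutions from `rst_epilog` cannot be embedded inside link targets, so `sphinx.ext.extlinks` is often the more convenient option when the version has to appear in a URL:
```python
# docs/source/conf.py (sketch) -- `version` is assumed to be defined already,
# e.g. version = "2.6.0".
extensions = ["sphinx.ext.extlinks"]

# :prefix_link:`examples/run_tf_glue.py` then renders as a link pinned to the
# tag of the version being documented.
extlinks = {
    "prefix_link": (
        "https://github.com/huggingface/transformers/blob/v{}/%s".format(version),
        "",
    )
}
```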
Some useful links: [rst_epilog](https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-rst_epilog), [rst_prolog](https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-rst_prolog) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7988/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7988/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7987 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7987/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7987/comments | https://api.github.com/repos/huggingface/transformers/issues/7987/events | https://github.com/huggingface/transformers/pull/7987 | 727,661,796 | MDExOlB1bGxSZXF1ZXN0NTA4NTA5OTI5 | 7,987 | TFMarian, TFMbart, TFPegasus, TFBlenderbot | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten \r\n+ I deleted the `_force_token_id` function, replaced with faster `tf.where` one-liner. (+ added regression test). \r\n+ Replaced unneeded `TFSharedEmbedding` with `tf.keras.layers.Embedding`\r\n+ switched all `.shape` to `shape_list`\r\n\r\nWDYT?"
] | 1,603 | 1,604 | 1,604 | CONTRIBUTOR | null | ### Notes:
- add `TFSinusoidalPositionalEmbeddings` (see the sketch after this list).
- Code structure identical to the corresponding pytorch code -- same classes, implementations differ only slightly.
- Integration tests, common tests, and rst updates for all 4 children. All 4 children run the same common tests as TFBart and at least 1 integration test.
- For pegasus, generations are not identical to PT because Linear layers are slightly different in tf/pt. For Marian, generations are identical.
- Loading will generate 0 warnings.
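A minimal sketch of a sinusoidal position table, for reference (illustrative only: this uses the classic even/odd sin/cos interleaving, whereas the layout in the actual `TFSinusoidalPositionalEmbeddings` has to match what the pretrained Marian/Pegasus weights expect):
```python
import numpy as np
import tensorflow as tf

def sinusoidal_table(n_positions: int, d_model: int) -> tf.Tensor:
    # angle[p, i] = p / 10000 ** (2 * (i // 2) / d_model)
    positions = np.arange(n_positions)[:, None]
    dims = np.arange(d_model)[None, :]
    angles = positions / np.power(10000, 2 * (dims // 2) / d_model)
    table = np.zeros((n_positions, d_model), dtype=np.float32)
    table[:, 0::2] = np.sin(angles[:, 0::2])  # even dimensions: sin
    table[:, 1::2] = np.cos(angles[:, 1::2])  # odd dimensions: cos
    return tf.constant(table)
```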
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7987/reactions",
"total_count": 4,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7987/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7987",
"html_url": "https://github.com/huggingface/transformers/pull/7987",
"diff_url": "https://github.com/huggingface/transformers/pull/7987.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7987.patch",
"merged_at": 1604071397000
} |
https://api.github.com/repos/huggingface/transformers/issues/7986 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7986/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7986/comments | https://api.github.com/repos/huggingface/transformers/issues/7986/events | https://github.com/huggingface/transformers/issues/7986 | 727,656,041 | MDU6SXNzdWU3Mjc2NTYwNDE= | 7,986 | T5 Decoder Inputs | {
"login": "alexorona",
"id": 11825654,
"node_id": "MDQ6VXNlcjExODI1NjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/11825654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexorona",
"html_url": "https://github.com/alexorona",
"followers_url": "https://api.github.com/users/alexorona/followers",
"following_url": "https://api.github.com/users/alexorona/following{/other_user}",
"gists_url": "https://api.github.com/users/alexorona/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexorona/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexorona/subscriptions",
"organizations_url": "https://api.github.com/users/alexorona/orgs",
"repos_url": "https://api.github.com/users/alexorona/repos",
"events_url": "https://api.github.com/users/alexorona/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexorona/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"your 2nd option is correct here: \r\n\r\n```python\r\n# pad_token prepended, eos_token unmasked in attention\r\ndecoder_input_ids: tensor([[0, 2018, 55, 1, 0, 0, 0]])\r\ndecoder_attention_mask: tensor([[1, 1, 1, 1, 0, 0, 0]])\r\n```\r\n\r\n1) You have to start with `decoder_start_token_id = pad_token_id` in T5 and \r\n2) the last EOS token should be attended to because the model \"should learn\" when the sentence is finished.",
"@patrickvonplaten Thanks, Patrick! That makes perfect sense. You're awesome!",
"@patrickvonplaten I noticed that when you pass this to the model:\r\n\r\n```\r\ndecoder_input_ids: tensor([[2018, 55, 1, 0]])\r\ndecoder_attention_mask: tensor([[1, 1, 1, 0])\r\n```\r\n`T5ConditionalGeneration` changes it to this before passing it to the decoder:\r\n\r\n```\r\n# Masks the eos_token\r\n# Correctly prepends an extra pad_id to inputs BUT appends a pad_token to attention_mask\r\ndecoder_input_ids: tensor([[0, 2018, 55, 1, 0]])\r\ndecoder_attention_mask: tensor([[1, 1, 1, 0, 0])\r\n```\r\nSo that you have to actually pass this initially:\r\n\r\n```\r\n# Pass this to model\r\ndecoder_input_ids: tensor([[2018, 55, 1, 0]])\r\ndecoder_attention_mask: tensor([[1, 1, 1, 1])\r\n\r\n# Which turns into this\r\ndecoder_input_ids: tensor([[0, 2018, 55, 1, 0]])\r\ndecoder_attention_mask: tensor([[1, 1, 1, 1, 0])\r\n```",
"Hey @alexorona - sorry I don't quite follow here...could you provide a code example that I can run to see what you mean? :-) "
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # ❓ Questions & Help
Just confirming that my data preprocessing is correct for T5. I added a print statement in `T5ForConditionalGeneration` for the `decoder_input_ids` and `decoder_attention_mask` just before they're passed to the decoder. Which of these is right?
```
# pad_token prepended, eos_token is not in the sequence
decoder_input_ids: tensor([[0, 2018, 55, 0, 0, 0, 0]])
decoder_attention_mask: tensor([[1, 1, 1, 0, 0, 0, 0]])
# pad_token prepended, eos_token unmasked in attention
decoder_input_ids: tensor([[0, 2018, 55, 1, 0, 0, 0]])
decoder_attention_mask: tensor([[1, 1, 1, 1, 0, 0, 0]])
# pad_token prepended, eos_token masked in attention
decoder_input_ids: tensor([[0, 2018, 55, 1, 0, 0, 0]])
decoder_attention_mask: tensor([[1, 1, 1, 0, 0, 0, 0]])
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7986/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7986/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7985 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7985/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7985/comments | https://api.github.com/repos/huggingface/transformers/issues/7985/events | https://github.com/huggingface/transformers/pull/7985 | 727,642,768 | MDExOlB1bGxSZXF1ZXN0NTA4NDk0Njgz | 7,985 | [setup] require torch>=1.4 | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [
"@patrickvonplaten @sshleifer @sgugger Could we solve the errors here to have `torch>=1.3` instead?",
"May be it's simpler to set it to 1.4+ and only if someone asks for it to bother with 1.3?",
"After discussion with the team, we'll take a look at supporting v1.3+ in the coming days, and if it requires too many efforts we'll stick with v1.4+. We'll take this as an opportunity to test the versions we say we support as well (1.3, 1.4, 1.5, 1.6, 1.7) so that the README isn't full of empty promises :slightly_smiling_face:.\r\n\r\nCould you keep your PR as-is for the coming days, and let me come back to you when we've reached a consensus?",
"That's an excellent and clear proposition, @LysandreJik - thank you!",
"> We'll take this as an opportunity to test the versions we say we support as well \r\n\r\nIf I may propose a scheduled CI that runs all tests for each of the supported versions, say, once a week or so. Probably `tf` too.\r\n\r\nI trust you will have the best plan. ",
"ping",
"will try to work on it today - are we sticking to torch 1.3 @LysandreJik ?\r\n\r\nMaybe we could discuss also whether we can do some more general optimizations in the lib then (I think we can safely change the attention masks to bools then)",
"Yes, we are! There's a branch in progress here: https://github.com/huggingface/transformers/tree/previous-torch\r\nFeel free to push fixes onto it directly. I've been planning on doing so right after the TAPAS merge.",
"The branch tests out torch versions going back to v1.3. It's not setup for slow tests right now, and it tests on every commit. I haven't really thought about if this is the best way to do so, but it's certainly easier to debug the failing tests this way.",
"I don't suppose there is a point at resolving the conflict, right? ",
"ping",
"I'd like to get to it as soon as I have some availability.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"ping",
"It's on the roadmap!",
"It seems wasteful trying to keep up with the conflicts for more than 6 months. Since it's on the roadmap I think it's safe to close this one now."
] | 1,603 | 1,622 | 1,622 | CONTRIBUTOR | null | I ran the non-slow test suite on lower torch versions:
* torch-1.2 and below is definitely a no-go - a gazillion errors in the test suite.
* torch-1.3+ is mostly OK, but:
```
FAILED tests/test_modeling_bart.py::BartHeadTests::test_generate_fp16 - RuntimeError: "argmax_cuda" not implemented for 'Half'
FAILED tests/test_modeling_funnel.py::FunnelModelIntegrationTest::test_inference_tiny_model - OSError: Unable to load weights from pytorch checkpoi...
FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_model_outputs_equivalence - RuntimeError: Expected object of scalar type Float but got scal...
FAILED tests/test_modeling_lxmert.py::LxmertModelTest::test_lxmert_pretraining - RuntimeError: Expected object of scalar type Float but got scalar ...
FAILED tests/test_modeling_openai.py::OpenAIGPTModelTest::test_model_outputs_equivalence - RuntimeError: Expected object of scalar type Float but g...
FAILED tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_reformer_model_fp16_generate - RuntimeError: "argmax_cuda" not implemented...
FAILED tests/test_modeling_reformer.py::ReformerLSHAttnModelTest::test_reformer_model_fp16_forward - RuntimeError: "argmax_cuda" not implemented fo...
FAILED tests/test_modeling_reformer.py::ReformerLSHAttnModelTest::test_reformer_model_fp16_generate - RuntimeError: "argmax_cuda" not implemented f...
```
which could be fixed in the core if desired, but it won't pass as-is right now.
* torch-1.4 mostly has serialization issues (files saved with a newer pytorch can't be read by an older one)
Hence changing to `torch>=1.4`
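For reference, the pin itself is a one-line change (a sketch -- the exact spot in `setup.py` may differ; in this repo torch typically lives in an extras group rather than `install_requires`):
```python
# setup.py (sketch only -- location and surrounding entries are assumptions)
extras["torch"] = ["torch>=1.4.0"]
```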
@sgugger, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7985/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7985",
"html_url": "https://github.com/huggingface/transformers/pull/7985",
"diff_url": "https://github.com/huggingface/transformers/pull/7985.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7985.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7984 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7984/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7984/comments | https://api.github.com/repos/huggingface/transformers/issues/7984/events | https://github.com/huggingface/transformers/pull/7984 | 727,637,901 | MDExOlB1bGxSZXF1ZXN0NTA4NDkwODQ5 | 7,984 | Reload checkpoint | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | COLLABORATOR | null | # What does this PR do?
This PR fixes a few bugs linked to resuming training from a checkpoint, mainly:
- the progress was not properly displayed (beginning at 0 instead of the step from the checkpoint)
- reloading the optimizer state and scheduler state on TPU was causing an error
Tested on TPU, single-GPU and multi-GPU env.
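For context, resuming in this version of `Trainer` is triggered by pointing `model_path` at a checkpoint directory (a sketch; the path is illustrative):
```python
# Resume from a previously saved checkpoint directory (illustrative path).
trainer.train(model_path="output_dir/checkpoint-500")
```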
Fixes #4963
Fixes #7976
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7984/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7984",
"html_url": "https://github.com/huggingface/transformers/pull/7984",
"diff_url": "https://github.com/huggingface/transformers/pull/7984.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7984.patch",
"merged_at": 1603396133000
} |
https://api.github.com/repos/huggingface/transformers/issues/7983 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7983/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7983/comments | https://api.github.com/repos/huggingface/transformers/issues/7983/events | https://github.com/huggingface/transformers/pull/7983 | 727,604,487 | MDExOlB1bGxSZXF1ZXN0NTA4NDY0MjU3 | 7,983 | add zero shot pipeline tags & examples | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Adds the zero shot pipeline tag as well as default examples for a selection of pre-trained MNLI models. cc @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7983/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7983/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7983",
"html_url": "https://github.com/huggingface/transformers/pull/7983",
"diff_url": "https://github.com/huggingface/transformers/pull/7983.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7983.patch",
"merged_at": 1603393284000
} |
https://api.github.com/repos/huggingface/transformers/issues/7982 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7982/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7982/comments | https://api.github.com/repos/huggingface/transformers/issues/7982/events | https://github.com/huggingface/transformers/issues/7982 | 727,575,895 | MDU6SXNzdWU3Mjc1NzU4OTU= | 7,982 | [s2s test] examples/seq2seq/test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_slow fails on GPU | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I would fix this by training on much more data (like 1000 obs) and getting the loss down much further.",
"I used more iterations - 6 was enough for 1 gpu, 10 for 2, so I went with 10.\r\n\r\nThis issue will be resolved by https://github.com/huggingface/transformers/pull/7965"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | This works (cpu / any pytorch):
```
CUDA_VISIBLE_DEVICES="" RUN_SLOW=1 pytest -sv examples/seq2seq/test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_slow
```
This fails on torch-1.5/GPU, 1.6, or nightly:
```
CUDA_VISIBLE_DEVICES="0" RUN_SLOW=1 pytest -sv examples/seq2seq/test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_slow
```
Same with pytorch-nightly; same with py37 and py38.
Error:
```
{'eval_loss': 5223.2333984375, 'eval_bleu': 0.0, 'eval_gen_len': 1.0, 'epoch': 1.0}
{'eval_loss': 5064.154296875, 'eval_bleu': 0.0, 'eval_gen_len': 1.0, 'epoch': 2.0}
{'eval_loss': 4966.837890625, 'eval_bleu': 0.0, 'eval_gen_len': 3.8, 'epoch': 3.0}
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:03<00:00, 1.55it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 4.69it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:01<00:00, 3.04it/s]FAILED
====================================================================== FAILURES ======================================================================
___________________________________________________ TestFinetuneTrainer.test_finetune_trainer_slow ___________________________________________________
self = <seq2seq.test_finetune_trainer.TestFinetuneTrainer testMethod=test_finetune_trainer_slow>
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:01<00:00, 2.78it/s]
@slow
def test_finetune_trainer_slow(self):
# There is a missing call to __init__process_group somewhere
output_dir = self.run_trainer(eval_steps=2, max_len="128", model_name=MARIAN_MODEL, num_train_epochs=3)
# Check metrics
logs = TrainerState.load_from_json(os.path.join(output_dir, "trainer_state.json")).log_history
eval_metrics = [log for log in logs if "eval_loss" in log.keys()]
first_step_stats = eval_metrics[0]
last_step_stats = eval_metrics[-1]
> assert first_step_stats["eval_bleu"] < last_step_stats["eval_bleu"] # model learned nothing
E AssertionError: assert 0.0 < 0.0
examples/seq2seq/test_finetune_trainer.py:36: AssertionError
```
env:
```
- `transformers` version: 3.4.0
- Platform: Linux-4.15.0-118-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7982/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7982/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7981 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7981/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7981/comments | https://api.github.com/repos/huggingface/transformers/issues/7981/events | https://github.com/huggingface/transformers/pull/7981 | 727,512,728 | MDExOlB1bGxSZXF1ZXN0NTA4MzkyMTY4 | 7,981 | Only log total_flos at the end of training | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | COLLABORATOR | null | # What does this PR do?
This PR removes the addition of `total_flos` at each (and every) log, since this kind of pollutes them, and only logs it once and for all at the end of training. Users can still define their own callbacks and do more with that value if they really want to, but from what I understood from @TevenLeScao, that value is mainly necessary at the end.
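For instance, a user who still wants the running value attached to every log could do it with a small callback (a sketch, assuming `state.total_flos` keeps being updated by the `Trainer`):
```python
from transformers import TrainerCallback

class TotalFlosLogger(TrainerCallback):
    # Re-attach the running FLO count to every log entry.
    def on_log(self, args, state, control, logs=None, **kwargs):
        if logs is not None:
            logs["total_flos"] = state.total_flos
```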
Also, now that it's not in the metrics anymore, I've reverted the default compute metrics to its previous behavior (sum of all metrics) since it's the documented behavior. (cc @madlag) If we want to really change it, we need to put more examples out there. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7981/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7981",
"html_url": "https://github.com/huggingface/transformers/pull/7981",
"diff_url": "https://github.com/huggingface/transformers/pull/7981.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7981.patch",
"merged_at": 1603391216000
} |
https://api.github.com/repos/huggingface/transformers/issues/7980 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7980/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7980/comments | https://api.github.com/repos/huggingface/transformers/issues/7980/events | https://github.com/huggingface/transformers/issues/7980 | 727,489,142 | MDU6SXNzdWU3Mjc0ODkxNDI= | 7,980 | 'DistributedDataParallel' object has no attribute 'save_pretrained' | {
"login": "AI678",
"id": 63541083,
"node_id": "MDQ6VXNlcjYzNTQxMDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/63541083?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AI678",
"html_url": "https://github.com/AI678",
"followers_url": "https://api.github.com/users/AI678/followers",
"following_url": "https://api.github.com/users/AI678/following{/other_user}",
"gists_url": "https://api.github.com/users/AI678/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AI678/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AI678/subscriptions",
"organizations_url": "https://api.github.com/users/AI678/orgs",
"repos_url": "https://api.github.com/users/AI678/repos",
"events_url": "https://api.github.com/users/AI678/events{/privacy}",
"received_events_url": "https://api.github.com/users/AI678/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Could you provide the information related to your environment, as well as the code that outputs this error, like it is asked in the issue template?",
"I am facing same issue as the given issu 'DistributedDataParallel' is custom class created by coder that is having base model available in Transformer repo\r\n\r\nWhere in below code that class is \"SentimentClassifier\"\r\n\r\n class SentimentClassifier(nn.Module):\r\n \r\n def __init__(self, n_classes):\r\n super(SentimentClassifier, self).__init__()\r\n self.bert = BertModel.from_pretrained(\"bert-base-multilingual-cased\")\r\n self.drop = nn.Dropout(p=0.3)\r\n self.out = nn.Linear(self.bert.config.hidden_size, n_classes)\r\n \r\n def forward(self, input_ids, attention_mask):\r\n _, pooled_output = self.bert(\r\n input_ids=input_ids,\r\n attention_mask=attention_mask\r\n )\r\n output = self.drop(pooled_output)\r\n return self.out(output)`\r\n\r\nthat is why it is giving error -\r\n\r\n SentimentClassifier object has no attribute 'save_pretrained'\r\n\r\nwhich is correct but I also want to know how can I save that model with my trained weights just like the base model so that I can Import it in few lines and use it.\r\n\r\nonly thing I am able to obtaine from this finetuning is a .bin file \r\nand I am not able to load state dict also\r\n\r\nI am looking for way to save my finetuned model with \"save_pretrained\"",
"Instead of inheriting from `nn.Module` you could inherit from `PreTrainedModel`, which is the abstract class we use for all models, that contains `save_pretrained`. Can you try that?",
"fine-tuning codes I seen on hugging face repo itself shows the same way to do that...so I did that...\r\nbdw I will try as you said and will update here\r\n\r\nhere is the link i refered that from\r\n\r\nhttps://huggingface.co/transformers/notebooks.html\r\n\r\n",
"Hey, My code just like this\r\n\r\n```\r\nfrom transformers import EncoderDecoderModel, BertTokenizer\r\nimport torch\r\nimport argparse\r\nimport os\r\nimport argparse\r\nimport torch.multiprocessing as mp\r\nimport torchvision\r\nimport torchvision.transforms as transforms\r\nimport torch.nn as nn\r\nimport torch.distributed as dist\r\n\r\ndef main():\r\n parser = argparse.ArgumentParser()\r\n args = parser.parse_args()\r\n args.max_src_len = 512\r\n args.max_dst_len = 128\r\n args.gpus = 4\r\n args.world_size = args.gpus\r\n args.epoches = 30\r\n mp.spawn(train, nprocs=args.gpus, args=(args,))\r\n\r\ndef train(gpu, args):\r\n rank = gpu\r\n dist.init_process_group( \r\n \tbackend='nccl', \r\n \t\tinit_method='tcp://127.0.0.1:23456', \r\n \tworld_size=args.world_size, \r\n \trank=rank \r\n ) \r\n torch.manual_seed(0)\r\n model = EncoderDecoderModel.from_pretrained(\"bert2bert\")\r\n torch.cuda.set_device(gpu)\r\n model = model.to(gpu)\r\n optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)\r\n model = nn.parallel.DistributedDataParallel(model, device_ids=[gpu])\r\n dataset_path = 'dataset/example.json'\r\n vocab_path = 'dataset/vocab.txt'\r\n dataset = CNNDataset(dataset_path, vocab_path, args)\r\n train_sampler = torch.utils.data.distributed.DistributedSampler(\r\n \tdataset,\r\n \tnum_replicas=args.world_size,\r\n \trank=rank\r\n )\r\n dataloader = DataLoader(dataset, batch_size=32, shuffle=False, \r\n num_workers=0,\r\n pin_memory=True,\r\n sampler=train_sampler)\r\n cnt = 0\r\n for epoch in range(args.epoches):\r\n for src, dst in dataloader:\r\n\r\n src = torch.stack(src).to(gpu)\r\n dst = torch.stack(dst).to(gpu)\r\n mask = (src!=0)\r\n mask = mask.long()\r\n outputs = model(input_ids=src, attention_mask=mask, decoder_input_ids=dst, labels=dst, return_dict=True)\r\n loss, logits = outputs.loss, outputs.logits\r\n optimizer.zero_grad()\r\n\r\n loss.backward()\r\n\r\n optimizer.step()\r\n\r\n if cnt % 1000 == 0 and gpu == 0 :\r\n model.save_pretrained(\"bert2bert\")\r\n cnt = cnt + 1\r\n\r\n\r\nif __name__ == '__main__':\r\n\r\n main()\r\n\r\n```\r\n@LysandreJik ,@ganeshkharad2",
"I can save this with state_dict. But how can I load it again with from_pretrained method ? ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> I can save this with state_dict. But how can I load it again with from_pretrained method ?\r\n\r\nHi, i meet the same problem, have you solved this problem? or?",
"> I can save this with state_dict. But how can I load it again with from_pretrained method ?\r\n\r\nHi, Did you find any workaround for this? Thanks in advance.",
"Any solution for this?"
] | 1,603 | 1,667 | 1,609 | NONE | null | # ❓ Questions & Help
## Details
Hey, I want to use EncoderDecoderModel for parallel training. When I save my model, I get the following error. How can I fix this?
'DistributedDataParallel' object has no attribute 'save_pretrained'
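For reference, the usual fix is to save through the wrapped module, since `DistributedDataParallel` only proxies the forward pass (a minimal sketch):
```python
# DistributedDataParallel wraps the real model; save_pretrained lives on .module.
model_to_save = model.module if hasattr(model, "module") else model
model_to_save.save_pretrained("output_dir")
```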
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7980/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7979 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7979/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7979/comments | https://api.github.com/repos/huggingface/transformers/issues/7979/events | https://github.com/huggingface/transformers/issues/7979 | 727,406,992 | MDU6SXNzdWU3Mjc0MDY5OTI= | 7,979 | How to make some structural changes to the EncoderDecoderModel ? | {
"login": "yhznb",
"id": 50665515,
"node_id": "MDQ6VXNlcjUwNjY1NTE1",
"avatar_url": "https://avatars.githubusercontent.com/u/50665515?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yhznb",
"html_url": "https://github.com/yhznb",
"followers_url": "https://api.github.com/users/yhznb/followers",
"following_url": "https://api.github.com/users/yhznb/following{/other_user}",
"gists_url": "https://api.github.com/users/yhznb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yhznb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yhznb/subscriptions",
"organizations_url": "https://api.github.com/users/yhznb/orgs",
"repos_url": "https://api.github.com/users/yhznb/repos",
"events_url": "https://api.github.com/users/yhznb/events{/privacy}",
"received_events_url": "https://api.github.com/users/yhznb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @yhznb, \r\n\r\nWe try to mainly use the github issues for bugs in the library. For more customized questions it would be great if you could use https://discuss.huggingface.co/ instead.\r\n\r\nRegarding your question I would just add a layer to `BertLMHeadModel` wherever you want to and then build your `EncoderDecoderModel` from `BertModel` (encoder) & your use-case speciifc `BertLMHeadModel` (decoder).",
"Hey, @patrickvonplaten, I have the same question. Can you provide a example of building the EncoderDecoderModel from BertModel (encoder) & use-case speciifc BertLMHeadModel ? I can't find this in the official document. Thank you very much .",
"I think the model(EncoderDecoderModel) outputs all the hidden states at once . And I want to control it step by step. For example , I want to change the LMhead of Decoder by concatenating another vector. The problem is that the DecoderModel outputs all the hidden states at once. I want to control it for step by step decoding. In other words. I want to use the concatenated vector as the hidden state for generation and use the generated word vector for next step's input. How can I change the model or call the interface properly ? Is it possible under the framework of huggingface ? \r\nThank you very much ! @patrickvonplaten",
"I also raised this in the forum. Does this issue need to be closed ?\r\nThe link is here :\r\nhttps://discuss.huggingface.co/t/control-encoderdecodermodel-to-generate-tokens-step-by-step/1756",
"thank you very much ! @patrickvonplaten ",
"Have you solved your question ? @AI678 I think it is all about changing the LMhaed and the calculation of logits. But I don't know how to change it .",
"Yes , you are right. @yhznb",
"> Hey @yhznb,\r\n> \r\n> We try to mainly use the github issues for bugs in the library. For more customized questions it would be great if you could use https://discuss.huggingface.co/ instead.\r\n> \r\n> Regarding your question I would just add a layer to `BertLMHeadModel` wherever you want to and then build your `EncoderDecoderModel` from `BertModel` (encoder) & your use-case speciifc `BertLMHeadModel` (decoder).\r\n\r\nSorry, I misunderstood what you meant. This is a feature to be developed. So, how long can this feature be developed ? thank you for your response.",
"Hey , I have similar demands. Because I think using only vanilla bert2bert or roberta2roberta is not sufficient for abstractive summarization . For fluency and information richness, we should consider to change the top layer of decoder for further learning.",
"Hey, @patrickvonplaten, when do you want to release that ? ",
"@nlpLover123 , you can control it step by step. But I think it is too slow for a large dataset like cnn-dailymail.\r\nAnd I also want to ask when do you want to release that ? @patrickvonplaten \r\nIf that needs too much time, maybe I would write a encoder_decoder_model from scratch. Because I have little time to wait for that. \r\nThank you very much .\r\n",
"that is too difficult @AI678 .Maybe it is slower that step by step generation.",
"so I just want to make a specific change at the LMhead layer @moonlightarc ",
"@AI678 , I don't think we are planning on releasing such a feature into the library. It's a very specific request and I'd suggest that you try to fork the repo and make the changes according to your needs"
] | 1,603 | 1,604 | 1,604 | NONE | null | # ❓ Questions & Help
## Details
Hey, I use EncoderDecoderModel for abstractive summarization. I load the bert2bert model like this: `model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased')`.
And I want to make some structural changes to the output layer of the decoder model.
For example, at one decoder step the output hidden state of the BERT decoder is a vector (s). I use another network to get a vector (w) that should make the summarization more accurate. I want to concatenate the two vectors in the output layer and use the resulting vector to generate a word from the vocabulary.
How can I do this?
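A minimal sketch of that fusion step (all names here are hypothetical, not an existing transformers API):
```python
import torch
import torch.nn as nn

class FusionLMHead(nn.Module):
    """Concatenate the decoder hidden state s with an extra vector w before the vocab projection."""

    def __init__(self, hidden_size: int, extra_size: int, vocab_size: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size + extra_size, vocab_size)

    def forward(self, hidden_states, w):
        # hidden_states: (batch, seq_len, hidden_size); w: (batch, extra_size)
        w = w.unsqueeze(1).expand(-1, hidden_states.size(1), -1)
        return self.proj(torch.cat([hidden_states, w], dim=-1))
```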
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7979/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7979/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7978 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7978/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7978/comments | https://api.github.com/repos/huggingface/transformers/issues/7978/events | https://github.com/huggingface/transformers/pull/7978 | 727,351,714 | MDExOlB1bGxSZXF1ZXN0NTA4MjU5OTY5 | 7,978 | Disable inference API for t5-11b | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7978/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7978",
"html_url": "https://github.com/huggingface/transformers/pull/7978",
"diff_url": "https://github.com/huggingface/transformers/pull/7978.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7978.patch",
"merged_at": 1603372118000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/7977 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7977/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7977/comments | https://api.github.com/repos/huggingface/transformers/issues/7977/events | https://github.com/huggingface/transformers/pull/7977 | 727,349,039 | MDExOlB1bGxSZXF1ZXN0NTA4MjU3Nzgy | 7,977 | GPT2 - Remove else branch adding 0 to the hidden state if token_type_embeds is None. | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | MEMBER | null | Currently, when `token_type_embeds` is `None` we set its value to `0` and add it to the triplet `inputs_embeds + position_embeds + token_type_embeds`.
This can be simplified to:
- avoid summing 0 over many elements
- avoid using a raw Python scalar value, which cannot be traced by TorchScript / ONNX when exporting.
Leading to:
> [ONNXRuntimeError] : 1 : FAIL : TensorRT input: 200 has no shape specified. Please run shape inference on the onnx model first.
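Roughly, the change amounts to the following (a sketch of the idea, not the literal diff):
```python
# Before: token_type_embeds may be the Python scalar 0, which the tracer
# cannot shape-infer.
token_type_embeds = self.wte(token_type_ids) if token_type_ids is not None else 0
hidden_states = inputs_embeds + position_embeds + token_type_embeds

# After: skip the addition entirely when there are no token type ids.
hidden_states = inputs_embeds + position_embeds
if token_type_ids is not None:
    hidden_states = hidden_states + self.wte(token_type_ids)
```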
Signed-off-by: Morgan Funtowicz <[email protected]> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7977/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7977",
"html_url": "https://github.com/huggingface/transformers/pull/7977",
"diff_url": "https://github.com/huggingface/transformers/pull/7977.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7977.patch",
"merged_at": 1603377702000
} |
https://api.github.com/repos/huggingface/transformers/issues/7976 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7976/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7976/comments | https://api.github.com/repos/huggingface/transformers/issues/7976/events | https://github.com/huggingface/transformers/issues/7976 | 727,334,859 | MDU6SXNzdWU3MjczMzQ4NTk= | 7,976 | [XLA] Cannot restore from checkpoint on TPU | {
"login": "ksjae",
"id": 17930170,
"node_id": "MDQ6VXNlcjE3OTMwMTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ksjae",
"html_url": "https://github.com/ksjae",
"followers_url": "https://api.github.com/users/ksjae/followers",
"following_url": "https://api.github.com/users/ksjae/following{/other_user}",
"gists_url": "https://api.github.com/users/ksjae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ksjae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ksjae/subscriptions",
"organizations_url": "https://api.github.com/users/ksjae/orgs",
"repos_url": "https://api.github.com/users/ksjae/repos",
"events_url": "https://api.github.com/users/ksjae/events{/privacy}",
"received_events_url": "https://api.github.com/users/ksjae/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Pinging @sgugger "
] | 1,603 | 1,603 | 1,603 | NONE | null | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-4.9.0-13-amd64-x86_64-with-debian-9.13
- Python version: 3.6.10
- PyTorch version (GPU?): 1.8.0a0+e5ed037 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No (but using TPU)
- Using distributed or parallel set-up in script?: using xla_spawn.py
### Who can help
@LysandreJik @sgugger @TevenLeScao
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: examples/language-modeling/run_language_modeling.py but with HF datasets
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: Text generation
## To reproduce
Steps to reproduce the behavior:
1. Modify examples/language-modeling/run_language_modeling.py to below
```
import logging
import math
import os
import glob
import datasets
from dataclasses import dataclass, field
from typing import Optional
from datasets import list_datasets, load_dataset
from transformers import (
CONFIG_MAPPING,
MODEL_WITH_LM_HEAD_MAPPING,
AutoConfig,
AutoModelWithLMHead,
AutoTokenizer,
DataCollatorForLanguageModeling,
DataCollatorForPermutationLanguageModeling,
HfArgumentParser,
LineByLineTextDataset,
PreTrainedTokenizer,
TextDataset,
Trainer,
TrainingArguments,
set_seed,
)
logger = logging.getLogger(__name__)
MODEL_CONFIG_CLASSES = list(MODEL_WITH_LM_HEAD_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
@dataclass
class ModelArguments:
"""
Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch.
"""
model_name_or_path: Optional[str] = field(
default=None,
metadata={
"help": "The model checkpoint for weights initialization. Leave None if you want to train a model from scratch."
},
)
model_type: Optional[str] = field(
default=None,
metadata={"help": "If training from scratch, pass a model type from the list: " + ", ".join(MODEL_TYPES)},
)
config_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
)
tokenizer_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
)
cache_dir: Optional[str] = field(
default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from s3"}
)
@dataclass
class DataTrainingArguments:
"""
Arguments pertaining to what data we are going to input our model for training and eval.
"""
train_data_file: Optional[str] = field(
default=None, metadata={"help": "The input training data file (a text file)."}
)
eval_data_file: Optional[str] = field(
default=None,
metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."},
)
line_by_line: bool = field(
default=False,
metadata={"help": "Whether distinct lines of text in the dataset are to be handled as distinct sequences."},
)
mlm: bool = field(
default=False, metadata={"help": "Train with masked-language modeling loss instead of language modeling."}
)
mlm_probability: float = field(
default=0.15, metadata={"help": "Ratio of tokens to mask for masked language modeling loss"}
)
plm_probability: float = field(
default=1 / 6,
metadata={
"help": "Ratio of length of a span of masked tokens to surrounding context length for permutation language modeling."
},
)
max_span_length: int = field(
default=5, metadata={"help": "Maximum length of a span of masked tokens for permutation language modeling."}
)
block_size: int = field(
default=-1,
metadata={
"help": "Optional input sequence length after tokenization."
"The training dataset will be truncated in block of this size for training."
"Default to the model max input length for single sentence inputs (take into account special tokens)."
},
)
overwrite_cache: bool = field(
default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
)
arrow: bool = field(
default=True,
metadata={
"help": "Use Arrow-based HF NLP for optimization."
},
)
def get_dataset(
args: DataTrainingArguments,
tokenizer: PreTrainedTokenizer,
evaluate: bool = False,
cache_dir: Optional[str] = "./cache",
):
tokenizer.pad_token = "<|endoftext|>"
tokenizer._pad_token = "<|endoftext|>"
#tokenizer.pad_token_id = 50256
file_path = args.eval_data_file if evaluate else args.train_data_file
if True:
dataset = datasets.load_from_disk(file_path)
dataset.set_format(type='torch', columns=['input_ids'])
return dataset
if False:
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), batched=True)
dataset.set_format(type='torch', columns=['input_ids'])
dataset.save_to_disk(file_path+'.arrow')
return dataset
if args.line_by_line:
return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)
else:
return TextDataset(
tokenizer=tokenizer,
file_path=file_path,
block_size=args.block_size,
overwrite_cache=args.overwrite_cache,
cache_dir=cache_dir,
)
"""
dataset = load_dataset("text", data_files=file_path, split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), batched=True)
dataset.set_format(type='torch', columns=['input_ids'])
return dataset
"""
def main():
# See all possible arguments in src/transformers/training_args.py
# or by passing the --help flag to this script.
# We now keep distinct sets of args, for a cleaner separation of concerns.
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
if data_args.eval_data_file is None and training_args.do_eval:
raise ValueError(
"Cannot do evaluation without an evaluation data file. Either supply a file to --eval_data_file "
"or remove the --do_eval argument."
)
if (
os.path.exists(training_args.output_dir)
and os.listdir(training_args.output_dir)
and training_args.do_train
and not training_args.overwrite_output_dir
):
raise ValueError(
f"Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome."
)
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO if training_args.local_rank in [-1, 0] else logging.WARN,
)
logger.warning(
"Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s",
training_args.local_rank,
training_args.device,
training_args.n_gpu,
bool(training_args.local_rank != -1),
training_args.fp16,
)
logger.info("Training/evaluation parameters %s", training_args)
# Set seed
set_seed(training_args.seed)
# Load pretrained model and tokenizer
#
# Distributed training:
# The .from_pretrained methods guarantee that only one local process can concurrently
# download model & vocab.
if model_args.config_name:
config = AutoConfig.from_pretrained(model_args.config_name, cache_dir=model_args.cache_dir)
elif model_args.model_name_or_path:
config = AutoConfig.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir)
else:
config = CONFIG_MAPPING[model_args.model_type]()
logger.warning("You are instantiating a new config instance from scratch.")
if model_args.tokenizer_name:
tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)
elif model_args.model_name_or_path:
tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir)
else:
raise ValueError(
"You are instantiating a new tokenizer from scratch. This is not supported, but you can do it from another script, save it,"
"and load it from here, using --tokenizer_name"
)
tokenizer.pad_token = "<|endoftext|>"
tokenizer._pad_token = "<|endoftext|>"
if model_args.model_name_or_path:
model = AutoModelWithLMHead.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
)
else:
logger.info("Training new model from scratch")
model = AutoModelWithLMHead.from_config(config)
model.resize_token_embeddings(len(tokenizer))
if config.model_type in ["bert", "roberta", "distilbert", "camembert"] and not data_args.mlm:
raise ValueError(
"BERT and RoBERTa-like models do not have LM heads but masked LM heads. They must be run using the"
"--mlm flag (masked language modeling)."
)
if data_args.block_size <= 0:
data_args.block_size = tokenizer.max_len
# Our input block size will be the max possible for the model
else:
data_args.block_size = min(data_args.block_size, tokenizer.max_len)
# Get datasets
train_dataset = (
get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None
)
eval_dataset = (
get_dataset(data_args, tokenizer=tokenizer, evaluate=True, cache_dir=model_args.cache_dir)
if training_args.do_eval
else None
)
if config.model_type == "xlnet":
data_collator = DataCollatorForPermutationLanguageModeling(
tokenizer=tokenizer,
plm_probability=data_args.plm_probability,
max_span_length=data_args.max_span_length,
)
else:
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=data_args.mlm, mlm_probability=data_args.mlm_probability
)
# Initialize our Trainer
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
prediction_loss_only=True,
)
# Training
if training_args.do_train:
model_path = (
model_args.model_name_or_path
if model_args.model_name_or_path is not None and os.path.isdir(model_args.model_name_or_path)
else None
)
trainer.train(model_path=model_path)
trainer.save_model()
# For convenience, we also re-save the tokenizer to the same directory,
# so that you can share your model easily on huggingface.co/models =)
if trainer.is_world_master():
tokenizer.save_pretrained(training_args.output_dir)
# Evaluation
results = {}
if training_args.do_eval:
logger.info("*** Evaluate ***")
eval_output = trainer.evaluate()
perplexity = math.exp(eval_output["eval_loss"])
result = {"perplexity": perplexity}
output_eval_file = os.path.join(training_args.output_dir, "eval_results_lm.txt")
if trainer.is_world_master():
with open(output_eval_file, "w") as writer:
logger.info("***** Eval results *****")
for key in sorted(result.keys()):
logger.info(" %s = %s", key, str(result[key]))
writer.write("%s = %s\n" % (key, str(result[key])))
results.update(result)
return results
def _mp_fn(index):
# For xla_spawn (TPUs)
main()
if __name__ == "__main__":
main()
```
2. Set up the torch-xla-nightly conda environment & set the env variables
3. Run the script from a checkpoint (replace the dataset, since I cannot upload 48 GB worth of arrow files)
```
XLA_USE_BF16=1 python3 examples/xla_spawn.py --num_cores 8 examples/language-modeling/run_language_modeling.py --output_dir=kogpt1 --model_type=gpt2 --do_train --train_data_file=/home/ksjcom0705_gmail_com/NEWS_ARROW --overwrite_output_dir --per_device_train_batch_size=6 --save_steps 10000 --num_train_epochs=1 --block_size 2048 --eval_steps 10000 --logging_steps=10000 --tokenizer_name /home/ksjcom0705_gmail_com/kotok --model_name_or_path=kogpt1/checkpoint-1000
```
The error is this:
```
Exception in device=TPU:3: don't know how to restore data location of torch.FloatStorage (tagged with xla:0)
Exception in device=TPU:5: don't know how to restore data location of torch.FloatStorage (tagged with xla:0)
Exception in device=TPU:6: don't know how to restore data location of torch.FloatStorage (tagged with xla:0)
Exception in device=TPU:1: don't know how to restore data location of torch.FloatStorage (tagged with xla:0)
Exception in device=TPU:0: don't know how to restore data location of torch.FloatStorage (tagged with xla:1)
Exception in device=TPU:4: don't know how to restore data location of torch.FloatStorage (tagged with xla:0)
Exception in device=TPU:7: don't know how to restore data location of torch.FloatStorage (tagged with xla:0)
Exception in device=TPU:2: don't know how to restore data location of torch.FloatStorage (tagged with xla:0)
Traceback (most recent call last):
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
fn(gindex, *args)
File "/home/ksjcom0705_gmail_com/transformers/examples/language-modeling/run_language_modeling.py", line 332, in _mp_fn
main()
File "/home/ksjcom0705_gmail_com/transformers/examples/language-modeling/run_language_modeling.py", line 300, in main
trainer.train(model_path=model_path)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/trainer.py", line 629, in train
torch.load(os.path.join(model_path, "optimizer.pt"), map_location=self.args.device)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/serialization.py", line 592, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/serialization.py", line 851, in _load
result = unpickler.load()
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/serialization.py", line 843, in persistent_load
load_tensor(data_type, size, key, _maybe_decode_ascii(location))
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/serialization.py", line 832, in load_tensor
loaded_storages[key] = restore_location(storage, location)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/serialization.py", line 812, in restore_location
return default_restore_location(storage, str(map_location))
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/serialization.py", line 180, in default_restore_location
+ location + ")")
RuntimeError: don't know how to restore data location of torch.FloatStorage (tagged with xla:0)
Traceback (most recent call last):
Traceback (most recent call last):
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
fn(gindex, *args)
File "/home/ksjcom0705_gmail_com/transformers/examples/language-modeling/run_language_modeling.py", line 332, in _mp_fn
main()
File "/home/ksjcom0705_gmail_com/transformers/examples/language-modeling/run_language_modeling.py", line 300, in main
trainer.train(model_path=model_path)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/trainer.py", line 629, in train
torch.load(os.path.join(model_path, "optimizer.pt"), map_location=self.args.device)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
fn(gindex, *args)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/serialization.py", line 592, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/serialization.py", line 851, in _load
result = unpickler.load()
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/serialization.py", line 843, in persistent_load
load_tensor(data_type, size, key, _maybe_decode_ascii(location))
File "/home/ksjcom0705_gmail_com/transformers/examples/language-modeling/run_language_modeling.py", line 332, in _mp_fn
```
(More of the same below)
## Expected behavior
Run normally
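A workaround sketch I am considering (assuming the checkpoint layout the Trainer writes; the path is just the example from the command above, nothing here is verified):
```
import os

import torch

checkpoint_dir = "kogpt1/checkpoint-1000"  # example path

# torch.load() cannot resolve storages tagged with an "xla:N" device, so load
# the optimizer state onto CPU first; the training process can then move the
# tensors to the XLA device (e.g. with torch_xla's xm.send_cpu_data_to_device).
optimizer_state = torch.load(
    os.path.join(checkpoint_dir, "optimizer.pt"), map_location="cpu"
)
print(type(optimizer_state))  # a plain dict usable by optimizer.load_state_dict()
```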
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7976/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7975 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7975/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7975/comments | https://api.github.com/repos/huggingface/transformers/issues/7975/events | https://github.com/huggingface/transformers/pull/7975 | 727,308,847 | MDExOlB1bGxSZXF1ZXN0NTA4MjI0OTAz | 7,975 | Fixing the "translation", "translation_XX_to_YY" pipelines. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"But If 1 models implies 1 task, how should we cope with models that are able to do multiple things (like `t5-base`) ? \r\n\r\nI think `datasets` does it correctly in that it does not make any choice on your behalf, but instead raises an Exception with your available choices.",
"> But If 1 models implies 1 task, how should we cope with models that are able to do multiple things (like `t5-base`) ?\r\n> \r\n> I think `datasets` does it correctly in that it does not make any choice on your behalf, but instead raises an Exception with your available choices.\r\n\r\nI think T5Base should default to a `ForConditionalPipeline` (which is more or less the `Text2TextPipeline` we have right now). Then the user could either provide a pipeline config that makes sure \"summarization\" or \"translation\" params are used for the pipeline. Note: In the end of the day all Seq2Seq pipelines are exactly the same -> they are all based on `.generate()` and they only differ on which params (`max_length`, `prefix`, ...) are used. Or/And we create a very shallow \"alias\" pipeline named `class TranslationPipeline(ConditionalGenerationPipeline)` that only overwrites the config params similar to what we do here: https://github.com/huggingface/transformers/blob/901e9b8eda2fe88af717f960ddc05cac1803679b/src/transformers/pipelines.py#L568 right now. \r\n\r\nMaybe @mfuntowicz can also give a bit more context on the Pipeline v2 vision here."
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Actually makes the "translation" and "translation_XX_to_YY" tasks behave correctly.
Background:
- Currently "translation_cn_to_ar" does not work (only 3 pairs are
supported).
- Some models contain in their config the correct values for the (src,
tgt) pair they can translate. It's usually just one pair, and we can
infer it automatically from `model.config.task_specific_params`. If
it's not defined, we can still probably load the TranslationPipeline
nevertheless.
Proposed fix:
- A simplified version of what could become more general: a
`parametrized` task. "translation" + (src, tgt) is what we need in this
instance and in the general case. The way we go about it for now is
simply parsing "translation_XX_to_YY" (see the sketch below). If more
cases of parametrized tasks arise, we should preferably go with
something closer to what `datasets` proposes, which is having a
secondary argument (`task_options`?) that stays close to what the task
requires.
- Should be backward compatible in all cases; for instance,
`pipeline(task="translation_en_to_de")` should work out of the box.
- Should provide a warning when a specific translation pair has been
selected on behalf of the user using
`model.config.task_specific_params`.
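For illustration, a minimal sketch of the task-name parsing described above (the regex and the bare-"translation" fallback are my assumptions, not necessarily the merged implementation):
```
import re

def parse_translation_task(task: str):
    """Parse 'translation_XX_to_YY' into (src, tgt).

    A bare 'translation' returns (None, None) so that the pair can later be
    inferred from model.config.task_specific_params.
    """
    match = re.match(r"^translation(?:_(\w+)_to_(\w+))?$", task)
    if match is None:
        raise KeyError(f"Unknown task {task!r}")
    return match.group(1), match.group(2)

print(parse_translation_task("translation_en_to_de"))  # ('en', 'de')
print(parse_translation_task("translation"))           # (None, None)
```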
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7975/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7975",
"html_url": "https://github.com/huggingface/transformers/pull/7975",
"diff_url": "https://github.com/huggingface/transformers/pull/7975.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7975.patch",
"merged_at": 1603379782000
} |
https://api.github.com/repos/huggingface/transformers/issues/7974 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7974/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7974/comments | https://api.github.com/repos/huggingface/transformers/issues/7974/events | https://github.com/huggingface/transformers/issues/7974 | 727,303,677 | MDU6SXNzdWU3MjczMDM2Nzc= | 7,974 | TrainingArguments error : TypeError: __init__() got an unexpected keyword argument 'evaluation_strategy' | {
"login": "Fourha",
"id": 49142670,
"node_id": "MDQ6VXNlcjQ5MTQyNjcw",
"avatar_url": "https://avatars.githubusercontent.com/u/49142670?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Fourha",
"html_url": "https://github.com/Fourha",
"followers_url": "https://api.github.com/users/Fourha/followers",
"following_url": "https://api.github.com/users/Fourha/following{/other_user}",
"gists_url": "https://api.github.com/users/Fourha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Fourha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Fourha/subscriptions",
"organizations_url": "https://api.github.com/users/Fourha/orgs",
"repos_url": "https://api.github.com/users/Fourha/repos",
"events_url": "https://api.github.com/users/Fourha/events{/privacy}",
"received_events_url": "https://api.github.com/users/Fourha/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"It was recently added, so you may need to upgrade your version of transformers.",
"> It was recently added, so you may need to upgrade your version of transformers.\r\n\r\nThanks,it works!\r\nI use transformers in \r\n\r\n> kaggle notebook\r\n\r\n , maybe there is some bug in such online applications. I once tried to update transformer to see whether this error would not appear, but it was not work. \r\n\r\nThis time I close my browser and restart the kaggle notebook. Everything going well ! ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I've just successfully solved this problem using this command.\r\n\r\n`pip install transformers --upgrade`\r\n"
] | 1,603 | 1,656 | 1,609 | NONE | null | # ❓ Questions & Help
## Details
When I use TrainingArguments (transformers 3.3.1), it raises the error `TypeError: __init__() got an unexpected keyword argument 'evaluation_strategy'`. I wonder why I've got this error.
This is my code:
```
training_args = TrainingArguments(
    output_dir="./no_num_pretrain_model",
    overwrite_output_dir=True,
    num_train_epochs=epochs,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    do_train=True,
    do_eval=True,
    evaluation_strategy="steps",
    logging_steps=10,
    save_steps=2000,
    eval_steps=10,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,  # evaluation dataset
    optimizers=(optimizer, scheduler),
)
```
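A quick diagnostic to check which transformers build the notebook is actually importing (output is environment-dependent; this is only a sketch):
```
import inspect

import transformers
from transformers import TrainingArguments

print(transformers.__version__)  # the version actually imported
print(transformers.__file__)     # install path; helps spot a stale cached copy
# True once the running version supports the argument:
print("evaluation_strategy" in inspect.signature(TrainingArguments.__init__).parameters)
```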
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7974/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7974/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7973 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7973/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7973/comments | https://api.github.com/repos/huggingface/transformers/issues/7973/events | https://github.com/huggingface/transformers/pull/7973 | 727,289,812 | MDExOlB1bGxSZXF1ZXN0NTA4MjA5Nzk5 | 7,973 | support relative path for best_model_checkpoint | {
"login": "HaebinShin",
"id": 6428529,
"node_id": "MDQ6VXNlcjY0Mjg1Mjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6428529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HaebinShin",
"html_url": "https://github.com/HaebinShin",
"followers_url": "https://api.github.com/users/HaebinShin/followers",
"following_url": "https://api.github.com/users/HaebinShin/following{/other_user}",
"gists_url": "https://api.github.com/users/HaebinShin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HaebinShin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HaebinShin/subscriptions",
"organizations_url": "https://api.github.com/users/HaebinShin/orgs",
"repos_url": "https://api.github.com/users/HaebinShin/repos",
"events_url": "https://api.github.com/users/HaebinShin/events{/privacy}",
"received_events_url": "https://api.github.com/users/HaebinShin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Fixes #7431
When I give a relative path to `output_dir`, it raises an error at line 1222.
https://github.com/huggingface/transformers/blob/83481056921296fadbdc86cd51c157a9a9327946/src/transformers/trainer.py#L1205-L1227
If I give './path1/path2' to `output_dir`, `checkpoints_sorted` becomes 'path1/path2' because the Path library normalizes it on line 1208.
But `self.state.best_model_checkpoint` is still './path1/path2'.
So it raises the error below.
```
Traceback (most recent call last):
File "../finetune.py", line 369, in <module>
main(**vars(args))
File "../finetune.py", line 315, in main
device)
File "../finetune.py", line 120, in train
trainer.train(optim_pretrained_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 803, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 860, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 918, in _save_checkpoint
self._rotate_checkpoints(use_mtime=True)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1235, in _rotate_checkpoints
checkpoints_sorted = self._sorted_checkpoints(use_mtime=use_mtime)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1223, in _sorted_checkpoints
best_model_index = checkpoints_sorted.index(self.state.best_model_checkpoint)
ValueError: './results/use_pretrained_test/checkpoint-1162' is not in list
```
So I resolved this error by applying Path to `self.state.best_model_checkpoint` as well (see the sketch below).
Any other idea is also welcome; please check it.
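A minimal reproduction of the string mismatch (illustrative only):
```
from pathlib import Path

best = "./results/use_pretrained_test/checkpoint-1162"

# Path() normalizes away the leading './' in its string form, so comparing
# the raw string against Path-derived checkpoint names fails:
print(str(Path(best)))          # results/use_pretrained_test/checkpoint-1162
print(str(Path(best)) == best)  # False -> list.index(best) raises ValueError
```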
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7973/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7973/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7973",
"html_url": "https://github.com/huggingface/transformers/pull/7973",
"diff_url": "https://github.com/huggingface/transformers/pull/7973.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7973.patch",
"merged_at": 1603367732000
} |
https://api.github.com/repos/huggingface/transformers/issues/7972 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7972/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7972/comments | https://api.github.com/repos/huggingface/transformers/issues/7972/events | https://github.com/huggingface/transformers/issues/7972 | 727,261,885 | MDU6SXNzdWU3MjcyNjE4ODU= | 7,972 | Unable to load UnifiedQA models, tf throws DataLossError | {
"login": "tshrjn",
"id": 8372098,
"node_id": "MDQ6VXNlcjgzNzIwOTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8372098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tshrjn",
"html_url": "https://github.com/tshrjn",
"followers_url": "https://api.github.com/users/tshrjn/followers",
"following_url": "https://api.github.com/users/tshrjn/following{/other_user}",
"gists_url": "https://api.github.com/users/tshrjn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tshrjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tshrjn/subscriptions",
"organizations_url": "https://api.github.com/users/tshrjn/orgs",
"repos_url": "https://api.github.com/users/tshrjn/repos",
"events_url": "https://api.github.com/users/tshrjn/events{/privacy}",
"received_events_url": "https://api.github.com/users/tshrjn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As far as I can see, the model you are trying load doesn't have a compliant format. We cannot help more without the full error stack.",
"Edit, added the full error stack.\r\nAlso, this is fairly easy to reproduce."
] | 1,603 | 1,603 | 1,603 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0a0+b31f58d (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
Probably
T5: @patrickvonplaten
tensorflow: @jplu
## Information
Model I am using: UnifiedQA (based on T5).
The problem arises when using the model loading code provided in the [UnifiedQA Readme](https://github.com/allenai/unifiedqa#using-the-models-in-pytorchhuggingface), shown below. Loading the model throws a DataLossError.
Code:
```
from transformers import T5Config, T5Tokenizer, T5ForConditionalGeneration
from transformers.modeling_t5 import load_tf_weights_in_t5
base_model = "t5-small"
tokenizer = T5Tokenizer.from_pretrained(base_model)
model = T5ForConditionalGeneration(T5Config.from_pretrained(base_model))
load_tf_weights_in_t5(model, None, "./models/unifiedqa-small/")
```
Error:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/py_checkpoint_reader.py in NewCheckpointReader(filepattern)
94 try:
---> 95 return CheckpointReader(compat.as_bytes(filepattern))
96 # TODO(b/143319754): Remove the RuntimeError casting logic once we resolve the
RuntimeError: Unable to open table file /content/base: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
During handling of the above exception, another exception occurred:
DataLossError Traceback (most recent call last)
5 frames
<ipython-input-27-b28dfb350abf> in <module>()
7
8 model_path = './base/' #@param ['./unifiedqa-base/', './base/']
----> 9 load_tf_weights_in_t5(model, None, model_path)
10
11 # tokenizer = T5Tokenizer.from_pretrained('t5-base')
/usr/local/lib/python3.6/dist-packages/transformers/modeling_t5.py in load_tf_weights_in_t5(model, config, tf_checkpoint_path)
78 logger.info("Converting TensorFlow checkpoint from {}".format(tf_path))
79 # Load weights from TF model
---> 80 init_vars = tf.train.list_variables(tf_path)
81 names = []
82 tf_weights = {}
/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/checkpoint_utils.py in list_variables(ckpt_dir_or_file)
96 List of tuples `(name, shape)`.
97 """
---> 98 reader = load_checkpoint(ckpt_dir_or_file)
99 variable_map = reader.get_variable_to_shape_map()
100 names = sorted(variable_map.keys())
/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/checkpoint_utils.py in load_checkpoint(ckpt_dir_or_file)
65 raise ValueError("Couldn't find 'checkpoint' file or checkpoints in "
66 "given directory %s" % ckpt_dir_or_file)
---> 67 return py_checkpoint_reader.NewCheckpointReader(filename)
68
69
/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/py_checkpoint_reader.py in NewCheckpointReader(filepattern)
97 # issue with throwing python exceptions from C++.
98 except RuntimeError as e:
---> 99 error_translator(e)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/py_checkpoint_reader.py in error_translator(e)
42 raise errors_impl.InvalidArgumentError(None, None, error_message)
43 elif 'Unable to open table file' in error_message:
---> 44 raise errors_impl.DataLossError(None, None, error_message)
45 elif 'Failed to find the saved tensor slices' in error_message:
46 raise errors_impl.InternalError(None, None, error_message)
DataLossError: Unable to open table file /path/to/models/unifiedqa-small: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
```
There might be an issue in the code for loading from TensorFlow; [related issue](https://github.com/tensorflow/models/issues/2676).
I also opened an [issue](https://github.com/allenai/unifiedqa/issues/6) on the UnifiedQA repo.
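A quick check I would try first (a sketch; the checkpoint step number in the comment is hypothetical):
```
import tensorflow as tf

# tf.train.list_variables() expects either a directory containing a
# 'checkpoint' index file or the checkpoint *prefix* itself (for example
# './models/unifiedqa-small/model.ckpt-1100500'), not a bare directory path.
# latest_checkpoint() resolves the prefix when the index file exists:
ckpt_prefix = tf.train.latest_checkpoint("./models/unifiedqa-small/")
print(ckpt_prefix)  # None means no 'checkpoint' file was found in the directory
if ckpt_prefix is not None:
    for name, shape in tf.train.list_variables(ckpt_prefix):
        print(name, shape)
```
 | {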
"url": "https://api.github.com/repos/huggingface/transformers/issues/7972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7972/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7971 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7971/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7971/comments | https://api.github.com/repos/huggingface/transformers/issues/7971/events | https://github.com/huggingface/transformers/pull/7971 | 727,255,674 | MDExOlB1bGxSZXF1ZXN0NTA4MTgxNzcz | 7,971 | FillMaskPipeline: support passing top_k on __call__ | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ok so should be ready to merge after a quick review @LysandreJik @sgugger!"
] | 1,603 | 1,603 | 1,603 | MEMBER | null | Also change name from topk to top_k for more consistency with the TextGenerationPipeline | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7971/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7971",
"html_url": "https://github.com/huggingface/transformers/pull/7971",
"diff_url": "https://github.com/huggingface/transformers/pull/7971.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7971.patch",
"merged_at": 1603385666000
} |
https://api.github.com/repos/huggingface/transformers/issues/7970 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7970/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7970/comments | https://api.github.com/repos/huggingface/transformers/issues/7970/events | https://github.com/huggingface/transformers/pull/7970 | 727,243,776 | MDExOlB1bGxSZXF1ZXN0NTA4MTcxOTMz | 7,970 | [tests|tokenizers] Refactoring pipelines test backbone - Small tokenizers improvements - General tests speedups | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Tagging a few people working on the pipelines and @sshleifer because I've added additional imports of TF Bart in `modeling_tf_auto` to make the pipelines happy.",
"Ok. great, thanks @sgugger and @sshleifer.\r\n\r\nI took the occasion to speed up the CI test suite (cc @stas00) by:\r\n- spinning out the pipeline test in a separate job\r\n- reducing the dual framework (tf+pt) tests overhead by focusing them on the PT+TF cross interactions (adding new tests at the same time) and removing the double testing with the tf and pytorch standalone tests.\r\n\r\nReady to merge imo.",
"Whoah! this is amazing - the slowest job is now the fastest! Thank you!!!\r\n\r\nI think there is only one potential issue with it - the half-baked until now codecov report is now completely useless since it no longer covers all tests so just as well remove it completely.",
"@stas00 Did just that https://github.com/huggingface/transformers/commit/829b9f8cc321aa28396e6203e0f21eed26b132f7\r\n\r\nRemoved codecov from the repo as well.",
"It's still there ;)\r\n```\r\n.circleci/config.yml: - run: pip install codecov pytest-cov\r\n.circleci/config.yml: - run: RUN_PT_TF_CROSS_TESTS=1 python -m pytest -n 8 --dist=loadfile -rA -s ./tests/ -m is_pt_tf_cross_test --cov --durations=0 | tee output.txt\r\n.circleci/config.yml: - run: codecov\r\n```"
] | 1,603 | 1,603 | 1,603 | MEMBER | null | # What does this PR do?
This PR refactors the pipeline tests to split them into smaller parts that are easier to iterate on.
There are now:
- one common backbone for testing pipelines in `test_pipeline_common.py` with two mixins that can be used depending on the test: `CustomInputPipelineCommonMixin` and `MonoInputPipelineCommonMixin`. The latter provides a standard `_test_pipeline(nlp: Pipeline)` method while the former requires writing a custom test pipeline method (rough shape sketched below).
- one test file per specific pipeline inheriting from the above backbone.
Small fixes:
- the special token ids can now be set in the tokenizers
- added `convert_tokens_to_string(List[str]) -> str` to the Fast Tokenizers
- added a `tokenizer.vocab` property in Fast Tokenizers (alias to `tokenizer.get_vocab()`)
- `tokenizer.decode()` now also accepts PyTorch, TensorFlow, and NumPy tensors/arrays as input
- gathered a few docstrings in the parent class for tokenizers
- also fixes the Dialog Pipeline #5516
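For illustration, the rough shape of the backbone (names simplified; a sketch, not the exact code in `test_pipeline_common.py`):
```
from transformers import pipeline

class MonoInputPipelineCommonMixin:
    """Shared test logic; concrete test classes only set the class attributes.

    Meant to be combined with unittest.TestCase, which provides the assert
    methods used below.
    """

    pipeline_task = None  # e.g. "sentiment-analysis"
    small_models = []     # tiny checkpoints that can run on CI

    def test_small_models(self):
        for model_name in self.small_models:
            nlp = pipeline(task=self.pipeline_task, model=model_name)
            self._test_pipeline(nlp)

    def _test_pipeline(self, nlp):
        outputs = nlp("A simple input string")
        self.assertIsInstance(outputs, list)
```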
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7970/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7970",
"html_url": "https://github.com/huggingface/transformers/pull/7970",
"diff_url": "https://github.com/huggingface/transformers/pull/7970.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7970.patch",
"merged_at": 1603461499000
} |
https://api.github.com/repos/huggingface/transformers/issues/7969 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7969/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7969/comments | https://api.github.com/repos/huggingface/transformers/issues/7969/events | https://github.com/huggingface/transformers/pull/7969 | 727,239,526 | MDExOlB1bGxSZXF1ZXN0NTA4MTY4NDcx | 7,969 | Add model_cards | {
"login": "brandenchan",
"id": 33759007,
"node_id": "MDQ6VXNlcjMzNzU5MDA3",
"avatar_url": "https://avatars.githubusercontent.com/u/33759007?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brandenchan",
"html_url": "https://github.com/brandenchan",
"followers_url": "https://api.github.com/users/brandenchan/followers",
"following_url": "https://api.github.com/users/brandenchan/following{/other_user}",
"gists_url": "https://api.github.com/users/brandenchan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brandenchan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brandenchan/subscriptions",
"organizations_url": "https://api.github.com/users/brandenchan/orgs",
"repos_url": "https://api.github.com/users/brandenchan/repos",
"events_url": "https://api.github.com/users/brandenchan/events{/privacy}",
"received_events_url": "https://api.github.com/users/brandenchan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"I think we should also add some meta-information (see [here](https://github.com/huggingface/model_card)) to the models:\r\n\r\n```\r\n---\r\nlanguage: de\r\nlicense: mit\r\ndatasets:\r\n- wikipedia\r\n---\r\n```\r\n\r\nAnd maybe we need to add `masked-lm` to the `tags` array, so that we can use the inference widget on the model page to do some nice masking experiments :)",
"Awesome collaboration btw :heart: :hugs: ",
"> And maybe we need to add `masked-lm` to the `tags` array, so that we can use the inference widget on the model page to do some nice masking experiments :)\r\n\r\nShouldn't need to (in theory)",
" ```json\r\n\"architectures\": [\r\n \"BertForMaskedLM\"\r\n ],\r\n```\r\n\r\nis currently missing in our BERT configs -> @brandenchan would it be possible that you add it :hugs: ",
"> ```json\r\n> \"architectures\": [\r\n> \"BertForMaskedLM\"\r\n> ],\r\n> ```\r\n> \r\n> is currently missing in our BERT configs -> @brandenchan would it be possible that you add it 🤗\r\n\r\n@stefan-it Done! Out of interest, what's the difference between BertForMaskedLM and BertForPretraining?",
"If I remember correctly BertForPretraining loads a LM head and a NSP head (both heads trained during pretraining). Am I correct @LysandreJik?"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | Add model cards for new German language models. Paper [here](https://arxiv.org/abs/2010.10906) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7969/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7969/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7969",
"html_url": "https://github.com/huggingface/transformers/pull/7969",
"diff_url": "https://github.com/huggingface/transformers/pull/7969.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7969.patch",
"merged_at": 1603974595000
} |
https://api.github.com/repos/huggingface/transformers/issues/7968 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7968/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7968/comments | https://api.github.com/repos/huggingface/transformers/issues/7968/events | https://github.com/huggingface/transformers/pull/7968 | 727,219,499 | MDExOlB1bGxSZXF1ZXN0NTA4MTUyMjUw | 7,968 | Herbert tokenizer auto load | {
"login": "rmroczkowski",
"id": 64909124,
"node_id": "MDQ6VXNlcjY0OTA5MTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/64909124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rmroczkowski",
"html_url": "https://github.com/rmroczkowski",
"followers_url": "https://api.github.com/users/rmroczkowski/followers",
"following_url": "https://api.github.com/users/rmroczkowski/following{/other_user}",
"gists_url": "https://api.github.com/users/rmroczkowski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rmroczkowski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rmroczkowski/subscriptions",
"organizations_url": "https://api.github.com/users/rmroczkowski/orgs",
"repos_url": "https://api.github.com/users/rmroczkowski/repos",
"events_url": "https://api.github.com/users/rmroczkowski/events{/privacy}",
"received_events_url": "https://api.github.com/users/rmroczkowski/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | Adding HerbertTokenizer imports for autoloading proper tokenizer.
Fixes # (issue)
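For reference, the intended effect (the model id is only an example):
```
from transformers import AutoTokenizer

# With the HerBERT entries registered in the auto mappings, AutoTokenizer
# should resolve HerBERT checkpoints to HerbertTokenizer / HerbertTokenizerFast
# instead of a generic fallback.
tokenizer = AutoTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")
print(type(tokenizer).__name__)
```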
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik.
@julien-c | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7968/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7968/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7968",
"html_url": "https://github.com/huggingface/transformers/pull/7968",
"diff_url": "https://github.com/huggingface/transformers/pull/7968.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7968.patch",
"merged_at": 1603360109000
} |
https://api.github.com/repos/huggingface/transformers/issues/7967 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7967/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7967/comments | https://api.github.com/repos/huggingface/transformers/issues/7967/events | https://github.com/huggingface/transformers/issues/7967 | 727,169,267 | MDU6SXNzdWU3MjcxNjkyNjc= | 7,967 | Should update version requirement for scipy in 'examples\\distillation\\requirements.txt'? | {
"login": "suliuzh",
"id": 27858725,
"node_id": "MDQ6VXNlcjI3ODU4NzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/27858725?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suliuzh",
"html_url": "https://github.com/suliuzh",
"followers_url": "https://api.github.com/users/suliuzh/followers",
"following_url": "https://api.github.com/users/suliuzh/following{/other_user}",
"gists_url": "https://api.github.com/users/suliuzh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suliuzh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suliuzh/subscriptions",
"organizations_url": "https://api.github.com/users/suliuzh/orgs",
"repos_url": "https://api.github.com/users/suliuzh/repos",
"events_url": "https://api.github.com/users/suliuzh/events{/privacy}",
"received_events_url": "https://api.github.com/users/suliuzh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@VictorSanh what do you think?"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | There are inconsistent version requirements for scipy in 'examples\\distillation\\requirements.txt' and 'examples\\movement-pruning\\requirements.txt'. Fixed version **1.3.1** in 'examples\\distillation\\requirements.txt' is not in the version range in **'>=1.4.1'** in 'examples\\movement-pruning\\requirements.txt'.

**Solution**
I am wondering if it is necessary to update the version requirement in 'examples\\distillation\\requirements.txt' to be consistent; a pinned version can often cause conflicts.
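One possible alignment, shown below, is illustrative only; the exact range is a maintainer decision:
```
# examples/distillation/requirements.txt (proposed, not merged)
scipy>=1.4.1
```
 | {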
"url": "https://api.github.com/repos/huggingface/transformers/issues/7967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7967/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7966 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7966/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7966/comments | https://api.github.com/repos/huggingface/transformers/issues/7966/events | https://github.com/huggingface/transformers/issues/7966 | 727,090,997 | MDU6SXNzdWU3MjcwOTA5OTc= | 7,966 | T5 with allowing model changes | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @rabeehk - for more specific cases we recommend that you fork master and tweak the model however you would like to so that it fits your purpose. \r\nI didn't understand 100% what kind of script you are looking for, but here you can browse some of the T5 scripts, we have collected: https://github.com/huggingface/transformers/tree/master/notebooks#-transformers-notebooks",
"Hi\nthanks for getting back to me. I am looking for a script showing to train\nT5 with multiple tasks. this is when they create a T5 registery mixture\ndataset in their original code and train one model on a mixture of several\ndataset.\nDo you know if huggingface version works fine for handling multiple\ndatasets? and do you know how performance is different from JAX\nimplementation? is this more or less the same? can one train the base T5\nwith a mixture of datasets with huggingface code?\n\nthank you very much.\nBest\nRabeeh\n\nOn Fri, Oct 23, 2020, 8:20 AM Patrick von Platen <[email protected]>\nwrote:\n\n> Hey @rabeehk <https://github.com/rabeehk> - for more specific cases we\n> recommend that you fork master and tweak the model however you would like\n> to so that it fits your purpose.\n> I didn't understand 100% what kind of script you are looking for, but here\n> you can browse some of the T5 scripts, we have collected:\n> https://github.com/huggingface/transformers/tree/master/notebooks#-transformers-notebooks\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/7966#issuecomment-714945934>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCEVU5IEVMEYHK4VLSTSMEOELANCNFSM4S2XT3OA>\n> .\n>\n"
] | 1,603 | 1,608 | 1,608 | NONE | null | Hi
I would like to be able to modify the T5 model architecture, in addition to training it on multiple tasks; I think the current script only works for summarization. Do you know of other scripts that allow training on multiple tasks?
One more question: the T5 TensorFlow repo has a small example for using your repo. Does it work at large scale? Is training on TPU working? And could it allow model changes, so that one can alter the model architecture?
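To clarify what I mean by multiple tasks: T5's mixtures interleave task-prefixed text-to-text examples from several datasets. A minimal sketch of that idea (hypothetical tasks and uniform sampling, not an official script):
```python
import random

# hypothetical mixture: each task maps to (source, target) text pairs
tasks = {
    "summarize": [("summarize: <long article>", "<short summary>")],
    "translate": [("translate English to German: hello", "hallo")],
}

def sample_batch(batch_size):
    # pick a task, then an example from it, for every slot in the batch
    names = list(tasks)
    return [random.choice(tasks[random.choice(names)]) for _ in range(batch_size)]
```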
thanks a lot | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7966/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7965 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7965/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7965/comments | https://api.github.com/repos/huggingface/transformers/issues/7965/events | https://github.com/huggingface/transformers/pull/7965 | 727,068,000 | MDExOlB1bGxSZXF1ZXN0NTA4MDMwOTk2 | 7,965 | [s2s trainer] tests to use distributed on multi-gpu machine | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sshleifer, the slow test isn't working for me prior to this PR with 0 or 1 gpu:\r\n\r\n```\r\nCUDA_VISIBLE_DEVICES=\"\" RUN_SLOW=1 pytest -sv examples/seq2seq/test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_slow \r\n[...]\r\n\r\nself = <seq2seq.test_finetune_trainer.TestFinetuneTrainer testMethod=test_finetune_trainer_slow>\r\n\r\n @slow\r\n def test_finetune_trainer_slow(self):\r\n # There is a missing call to __init__process_group somewhere\r\n> output_dir = self.run_trainer(eval_steps=2, max_len=\"128\", model_name=MARIAN_MODEL, num_train_epochs=3)\r\n\r\nexamples/seq2seq/test_finetune_trainer.py:35: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\nexamples/seq2seq/test_finetune_trainer.py:113: in run_trainer\r\n main()\r\nexamples/seq2seq/finetune_trainer.py:199: in main\r\n model = AutoModelForSeq2SeqLM.from_pretrained(\r\nsrc/transformers/modeling_auto.py:1118: in from_pretrained\r\n return MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING[type(config)].from_pretrained(\r\nsrc/transformers/modeling_utils.py:947: in from_pretrained\r\n model = cls(config, *model_args, **model_kwargs)\r\nsrc/transformers/modeling_bart.py:964: in __init__\r\n base_model = BartModel(config)\r\nsrc/transformers/modeling_bart.py:843: in __init__\r\n self.encoder = BartEncoder(config, self.shared)\r\nsrc/transformers/modeling_bart.py:315: in __init__\r\n self.embed_positions = SinusoidalPositionalEmbedding(\r\nsrc/transformers/modeling_bart.py:1331: in __init__\r\n self.weight = self._init_weight(self.weight)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nout = Parameter containing:\r\ntensor([[ 2.2244e+00, -1.2380e+00, -3.5307e-01, ..., -1.0924e+00,\r\n -1.3130e+00, 1.7737... [-3.8906e-01, 9.2203e-01, 1.7887e-01, ..., -1.7493e-01,\r\n -1.6993e+00, 2.0896e-01]], requires_grad=True)\r\n\r\n @staticmethod\r\n def _init_weight(out: nn.Parameter):\r\n \"\"\"Identical to the XLM create_sinusoidal_embeddings except features are not interleaved.\r\n The cos features are in the 2nd half of the vector. [dim // 2:]\r\n \"\"\"\r\n n_pos, dim = out.shape\r\n position_enc = np.array(\r\n [[pos / np.power(10000, 2 * (j // 2) / dim) for j in range(dim)] for pos in range(n_pos)]\r\n )\r\n> out[:, 0 : dim // 2] = torch.FloatTensor(np.sin(position_enc[:, 0::2])) # This line breaks for odd n_pos\r\nE RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation\r\n```\r\n**edit**: After messing around with my conda env due to ever-breaking tf, this went away, but a new thing came instead: https://github.com/huggingface/transformers/issues/7982\r\n",
"interesting, I can't reproduce that. What's your `transformers-cli env`? Does it fail after the change?",
"No, it fails on master.\r\n\r\n```\r\n- `transformers` version: 3.4.0\r\n- Platform: Linux-4.15.0-118-generic-x86_64-with-glibc2.10\r\n- Python version: 3.8.5\r\n- PyTorch version (GPU?): 1.8.0.dev20201020 (True)\r\n- Tensorflow version (GPU?): 2.3.1 (True)\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n```",
"I'd open an issue \"sinusoidal positional embedding broken on torch 1.8\".\r\nReasoing: [these](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_bart.py#L606) fast tests pass CI and I can't replicate on on torch 1.5.\r\n\r\nIs this ready to merge otherwise?",
"Need to sort out this first: https://github.com/huggingface/transformers/issues/7982\r\n\r\nIt's mostly ready otherwise, but the slow test will fail as it has nothing to do with this PR. \r\n\r\n**edit** resolved in this PR.",
"Out of curiosity, how did you resolve the bleu issue?",
"Great work, btw! This is awesome. Now we can tell people to run these tests before they break things :) \r\nApparently there is multi-gpu ci running src/ tests at some frequency FYI, I think through gh actions.",
"I replied in the other issue: I used more iterations - 6 was enough for 1 gpu, 10 for 2, so I went with 10.\r\n\r\nI think if someone tries it on more than 2 gpus it might need even more iterations - could probably codify this with a factor of n_gpus.\r\n",
"Documenting it now https://github.com/huggingface/transformers/pull/7993\r\nIf you think anything needs to be added please let me know.\r\nIt will get better over time.",
"> Apparently there is multi-gpu ci running src/ tests at some frequency FYI, I think through gh actions.\r\n\r\nOnce a day yes:\r\nhttps://github.com/huggingface/transformers/blob/master/.github/workflows/self-scheduled.yml#L74",
"Later I want to add the non-interactive IO pipe options - in case it hangs for someone - by default it could be non-interactive - always works, and only make it interactive for debug purposes."
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | This PR:
* [x] abstracts the async forking io into local `utils.py`
* [x] deploys distributed training with special async io forking for `examples/seq2seq/test_finetune_trainer.py`
So now this works (2 gpus):
```
CUDA_VISIBLE_DEVICES="0,1" RUN_SLOW=1 pytest -sv examples/seq2seq/test_finetune_trainer.py
```
and this still works (1 gpu)
```
CUDA_VISIBLE_DEVICES="0" RUN_SLOW=1 pytest -sv examples/seq2seq/test_finetune_trainer.py
```
Fixes: #7833
Fixes: #7982
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7965/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7965",
"html_url": "https://github.com/huggingface/transformers/pull/7965",
"diff_url": "https://github.com/huggingface/transformers/pull/7965.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7965.patch",
"merged_at": 1603401982000
} |
https://api.github.com/repos/huggingface/transformers/issues/7964 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7964/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7964/comments | https://api.github.com/repos/huggingface/transformers/issues/7964/events | https://github.com/huggingface/transformers/pull/7964 | 727,058,304 | MDExOlB1bGxSZXF1ZXN0NTA4MDIyNjk0 | 7,964 | adding beginner-friendly notebook on text classification with DistilBERT/TF | {
"login": "peterbayerle",
"id": 33770187,
"node_id": "MDQ6VXNlcjMzNzcwMTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/33770187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peterbayerle",
"html_url": "https://github.com/peterbayerle",
"followers_url": "https://api.github.com/users/peterbayerle/followers",
"following_url": "https://api.github.com/users/peterbayerle/following{/other_user}",
"gists_url": "https://api.github.com/users/peterbayerle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peterbayerle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peterbayerle/subscriptions",
"organizations_url": "https://api.github.com/users/peterbayerle/orgs",
"repos_url": "https://api.github.com/users/peterbayerle/repos",
"events_url": "https://api.github.com/users/peterbayerle/events{/privacy}",
"received_events_url": "https://api.github.com/users/peterbayerle/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Looking at the current community notebooks, it seems that few are targeted for absolute beginners and even fewer are written with TensorFlow. This notebook describes absolutely everything a beginner would need to know, including how to save/load their model and use it for new predictions (this is often omitted in tutorials)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7964/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/7964/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7964",
"html_url": "https://github.com/huggingface/transformers/pull/7964",
"diff_url": "https://github.com/huggingface/transformers/pull/7964.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7964.patch",
"merged_at": 1603373450000
} |
https://api.github.com/repos/huggingface/transformers/issues/7963 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7963/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7963/comments | https://api.github.com/repos/huggingface/transformers/issues/7963/events | https://github.com/huggingface/transformers/issues/7963 | 727,002,341 | MDU6SXNzdWU3MjcwMDIzNDE= | 7,963 | Load tuned model without downloading from huggingface | {
"login": "sachinruk",
"id": 1410927,
"node_id": "MDQ6VXNlcjE0MTA5Mjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1410927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinruk",
"html_url": "https://github.com/sachinruk",
"followers_url": "https://api.github.com/users/sachinruk/followers",
"following_url": "https://api.github.com/users/sachinruk/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinruk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinruk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinruk/subscriptions",
"organizations_url": "https://api.github.com/users/sachinruk/orgs",
"repos_url": "https://api.github.com/users/sachinruk/repos",
"events_url": "https://api.github.com/users/sachinruk/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinruk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
" Instead of doing `self.base = DistilBertModel(DistilBertConfig())` you can do `self.base = DistilBertModel(DistilBertConfig(vocab_size=119547))`.\r\n"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | I have the following model which I have tuned for a classification task.
```python
import torch
from torch import nn
from transformers import AutoModel

BASE_MODEL = "distilbert-base-multilingual-cased"

class Model(nn.Module):
    def __init__(self, nc, p=0.1):
        super().__init__()
        self.base = AutoModel.from_pretrained(BASE_MODEL)
        in_features = 768  # self.base.pooler.dense.out_features
        self.dropout = nn.Dropout(p=p)
        self.fc = nn.Linear(in_features, nc, bias=False)

    def forward(self, x):
        out = self.base(**x)[0]  # last hidden states
        out = out[:, 0, :]       # representation of the first ([CLS]) token
        out = self.dropout(out)
        return self.fc(out)
```
However, the way that I load the model currently is by doing:
```python
model = Model(nc)
model.load_state_dict(torch.load(TUNED_MODEL_PATH))
```
The first line above causes `distilbert` to be downloaded again, and then my weights overwrite the model. I was hoping that there is a way of just getting the base architecture without downloading any weights.
I tried doing `self.base = DistilBertModel(DistilBertConfig())`. However, when loading the tuned model, it gives the error `size mismatch for base.embeddings.word_embeddings.weight: copying a param with shape torch.Size([119547, 768]) from checkpoint, the shape in current model is torch.Size([30522, 768]).`. I believe this is due to the fact that I am using **a multilingual** model.
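A minimal sketch of what I am after, building the base from a config alone so that nothing is downloaded (the vocab size 119547 is taken from the error message above):
```python
from transformers import DistilBertConfig, DistilBertModel

# randomly initialized, no download; the fine-tuned state_dict loaded
# afterwards overwrites these weights anyway
config = DistilBertConfig(vocab_size=119547)  # multilingual-cased vocab size
base = DistilBertModel(config)
```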
Side questions:
- Where does the AutoTokenizer/ AutoModel download the relevant files to?
- Also apologies for posting here instead of the forum. For some reason it won't let me login with the huggingface credentials.
Version: transformers==3.1.0
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7963/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7962 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7962/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7962/comments | https://api.github.com/repos/huggingface/transformers/issues/7962/events | https://github.com/huggingface/transformers/issues/7962 | 726,988,544 | MDU6SXNzdWU3MjY5ODg1NDQ= | 7,962 | xla_spawn and run_language_modeling slow on TPUs | {
"login": "ksjae",
"id": 17930170,
"node_id": "MDQ6VXNlcjE3OTMwMTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ksjae",
"html_url": "https://github.com/ksjae",
"followers_url": "https://api.github.com/users/ksjae/followers",
"following_url": "https://api.github.com/users/ksjae/following{/other_user}",
"gists_url": "https://api.github.com/users/ksjae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ksjae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ksjae/subscriptions",
"organizations_url": "https://api.github.com/users/ksjae/orgs",
"repos_url": "https://api.github.com/users/ksjae/repos",
"events_url": "https://api.github.com/users/ksjae/events{/privacy}",
"received_events_url": "https://api.github.com/users/ksjae/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The only difference I'm seeing between our script and yours is the data loading, here using `datasets`. Are we aware of TPU slowdown when using `datasets` @lhoestq, @thomwolf?",
"I haven't tested `datasets` with TPU yet. I know @sgugger tried once, did you notice slowdowns ?",
"Not for the new run_glue script introduced in #7917",
"Is there any metrics/debug information I can provide?",
"I have similar issues:\r\n* mxu utilization mostly 0%\r\n* around 150 s/it\r\n\r\nsetup:\r\n* n1-highmem-16\r\n* TPU v2-8\r\n\r\nI tried it both with a map-style dataset using `datasets`' wiki dump and an iterable-style dataset with the `Trainer` adapted. Same result. The slowdown is not on behalf of the data loading but of the forward-pass, loss computation and backpropagation on the tpu.",
"Are you sure all your batches of inputs have the exact same shape? I tested thoroughly the script on TPUs with the datasets library and:\r\n- when inputs are not all of the same size, the training is excruciatingly slow (which is expected because XLA recompiles the code at each training step in this case)\r\n- when inputs are all of the same size, it runs smoothly and fast.\r\n\r\nThis is independent of using the datasets library or not, and this is expected behavior on TPUs, as XLA does not handle well dynamic shapes. ",
"How can I check whether inputs are the same size? Or how can I pad the inputs so it has fixed size?",
"Your dataset is hidden inside the `load_dataset` function, so I can't advise you on how to add padding. There are examples of this in the new `run_mlm.py` script.\r\nAs for checking your inputs are all of the same size, it's just a pass through your dataset:\r\n```\r\nshapes = []\r\nfor x in dataset:\r\n shapes.append(x[\"input_ids\"].shape)\r\nprint(set(shapes))\r\n``` ",
"Keeping shapes constant fixed the speed issue for me. After few iteration (~ 5), the graph stabilized and the iteration speed went down from 150s/it to few seconds per batch.\r\n\r\nFurther readings:\r\n* https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#known-performance-caveats\r\n* https://github.com/pytorch/xla/issues/2383\r\n* https://github.com/pytorch/xla/issues/2368",
"Fixed it, closing."
] | 1,603 | 1,604 | 1,604 | NONE | null | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-4.9.0-13-amd64-x86_64-with-debian-9.13
- Python version: 3.6.10
- PyTorch version (GPU?): 1.8.0a0+e5ed037 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No (but using TPU)
- Using distributed or parallel set-up in script?: using xla_spawn.py
### Who can help
@LysandreJik @sgugger
or the writer of examples/language-modeling/run_language_modeling.py or a TPU master
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: examples/language-modeling/run_language_modeling.py but with HF datasets
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: Text generation
## To reproduce
Steps to reproduce the behavior:
1. Modify examples/language-modeling/run_language_modeling.py to below
```
import logging
import math
import os
import glob
import datasets
from dataclasses import dataclass, field
from typing import Optional
from datasets import list_datasets, load_dataset
from transformers import (
CONFIG_MAPPING,
MODEL_WITH_LM_HEAD_MAPPING,
AutoConfig,
AutoModelWithLMHead,
AutoTokenizer,
DataCollatorForLanguageModeling,
DataCollatorForPermutationLanguageModeling,
HfArgumentParser,
LineByLineTextDataset,
PreTrainedTokenizer,
TextDataset,
Trainer,
TrainingArguments,
set_seed,
)
logger = logging.getLogger(__name__)
MODEL_CONFIG_CLASSES = list(MODEL_WITH_LM_HEAD_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
@dataclass
class ModelArguments:
"""
Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch.
"""
model_name_or_path: Optional[str] = field(
default=None,
metadata={
"help": "The model checkpoint for weights initialization. Leave None if you want to train a model from scratch."
},
)
model_type: Optional[str] = field(
default=None,
metadata={"help": "If training from scratch, pass a model type from the list: " + ", ".join(MODEL_TYPES)},
)
config_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
)
tokenizer_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
)
cache_dir: Optional[str] = field(
default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from s3"}
)
@dataclass
class DataTrainingArguments:
"""
Arguments pertaining to what data we are going to input our model for training and eval.
"""
train_data_file: Optional[str] = field(
default=None, metadata={"help": "The input training data file (a text file)."}
)
eval_data_file: Optional[str] = field(
default=None,
metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."},
)
line_by_line: bool = field(
default=False,
metadata={"help": "Whether distinct lines of text in the dataset are to be handled as distinct sequences."},
)
mlm: bool = field(
default=False, metadata={"help": "Train with masked-language modeling loss instead of language modeling."}
)
mlm_probability: float = field(
default=0.15, metadata={"help": "Ratio of tokens to mask for masked language modeling loss"}
)
plm_probability: float = field(
default=1 / 6,
metadata={
"help": "Ratio of length of a span of masked tokens to surrounding context length for permutation language modeling."
},
)
max_span_length: int = field(
default=5, metadata={"help": "Maximum length of a span of masked tokens for permutation language modeling."}
)
block_size: int = field(
default=-1,
metadata={
"help": "Optional input sequence length after tokenization."
"The training dataset will be truncated in block of this size for training."
"Default to the model max input length for single sentence inputs (take into account special tokens)."
},
)
overwrite_cache: bool = field(
default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
)
arrow: bool = field(
default=True,
metadata={
"help": "Use Arrow-based HF NLP for optimization."
},
)
def get_dataset(
args: DataTrainingArguments,
tokenizer: PreTrainedTokenizer,
evaluate: bool = False,
cache_dir: Optional[str] = "./cache",
):
tokenizer.pad_token = "<|endoftext|>"
tokenizer._pad_token = "<|endoftext|>"
#tokenizer.pad_token_id = 50256
file_path = args.eval_data_file if evaluate else args.train_data_file
if True:
dataset = datasets.load_from_disk(file_path)
dataset.set_format(type='torch', columns=['input_ids'])
return dataset
if False:
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), batched=True)
dataset.set_format(type='torch', columns=['input_ids'])
dataset.save_to_disk(file_path+'.arrow')
return dataset
if args.line_by_line:
return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)
else:
return TextDataset(
tokenizer=tokenizer,
file_path=file_path,
block_size=args.block_size,
overwrite_cache=args.overwrite_cache,
cache_dir=cache_dir,
)
"""
dataset = load_dataset("text", data_files=file_path, split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), batched=True)
dataset.set_format(type='torch', columns=['input_ids'])
return dataset
"""
def main():
# See all possible arguments in src/transformers/training_args.py
# or by passing the --help flag to this script.
# We now keep distinct sets of args, for a cleaner separation of concerns.
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
if data_args.eval_data_file is None and training_args.do_eval:
raise ValueError(
"Cannot do evaluation without an evaluation data file. Either supply a file to --eval_data_file "
"or remove the --do_eval argument."
)
if (
os.path.exists(training_args.output_dir)
and os.listdir(training_args.output_dir)
and training_args.do_train
and not training_args.overwrite_output_dir
):
raise ValueError(
f"Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome."
)
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO if training_args.local_rank in [-1, 0] else logging.WARN,
)
logger.warning(
"Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s",
training_args.local_rank,
training_args.device,
training_args.n_gpu,
bool(training_args.local_rank != -1),
training_args.fp16,
)
logger.info("Training/evaluation parameters %s", training_args)
# Set seed
set_seed(training_args.seed)
# Load pretrained model and tokenizer
#
# Distributed training:
# The .from_pretrained methods guarantee that only one local process can concurrently
# download model & vocab.
if model_args.config_name:
config = AutoConfig.from_pretrained(model_args.config_name, cache_dir=model_args.cache_dir)
elif model_args.model_name_or_path:
config = AutoConfig.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir)
else:
config = CONFIG_MAPPING[model_args.model_type]()
logger.warning("You are instantiating a new config instance from scratch.")
if model_args.tokenizer_name:
tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)
elif model_args.model_name_or_path:
tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir)
else:
raise ValueError(
"You are instantiating a new tokenizer from scratch. This is not supported, but you can do it from another script, save it,"
"and load it from here, using --tokenizer_name"
)
tokenizer.pad_token = "<|endoftext|>"
tokenizer._pad_token = "<|endoftext|>"
if model_args.model_name_or_path:
model = AutoModelWithLMHead.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
)
else:
logger.info("Training new model from scratch")
model = AutoModelWithLMHead.from_config(config)
model.resize_token_embeddings(len(tokenizer))
if config.model_type in ["bert", "roberta", "distilbert", "camembert"] and not data_args.mlm:
raise ValueError(
"BERT and RoBERTa-like models do not have LM heads but masked LM heads. They must be run using the"
"--mlm flag (masked language modeling)."
)
if data_args.block_size <= 0:
data_args.block_size = tokenizer.max_len
# Our input block size will be the max possible for the model
else:
data_args.block_size = min(data_args.block_size, tokenizer.max_len)
# Get datasets
train_dataset = (
get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None
)
eval_dataset = (
get_dataset(data_args, tokenizer=tokenizer, evaluate=True, cache_dir=model_args.cache_dir)
if training_args.do_eval
else None
)
if config.model_type == "xlnet":
data_collator = DataCollatorForPermutationLanguageModeling(
tokenizer=tokenizer,
plm_probability=data_args.plm_probability,
max_span_length=data_args.max_span_length,
)
else:
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=data_args.mlm, mlm_probability=data_args.mlm_probability
)
# Initialize our Trainer
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
prediction_loss_only=True,
)
# Training
if training_args.do_train:
model_path = (
model_args.model_name_or_path
if model_args.model_name_or_path is not None and os.path.isdir(model_args.model_name_or_path)
else None
)
trainer.train(model_path=model_path)
trainer.save_model()
# For convenience, we also re-save the tokenizer to the same directory,
# so that you can share your model easily on huggingface.co/models =)
if trainer.is_world_master():
tokenizer.save_pretrained(training_args.output_dir)
# Evaluation
results = {}
if training_args.do_eval:
logger.info("*** Evaluate ***")
eval_output = trainer.evaluate()
perplexity = math.exp(eval_output["eval_loss"])
result = {"perplexity": perplexity}
output_eval_file = os.path.join(training_args.output_dir, "eval_results_lm.txt")
if trainer.is_world_master():
with open(output_eval_file, "w") as writer:
logger.info("***** Eval results *****")
for key in sorted(result.keys()):
logger.info(" %s = %s", key, str(result[key]))
writer.write("%s = %s\n" % (key, str(result[key])))
results.update(result)
return results
def _mp_fn(index):
# For xla_spawn (TPUs)
main()
if __name__ == "__main__":
main()
```
2. set torch-xla-nightly Conda & set env
3. run script (replace dataset, since I cannot upload 48 GB worth of arrow files)
```
XLA_USE_BF16=1 python3 examples/xla_spawn.py --num_cores 8 examples/language-modeling/train.py --output_dir=kogpt1 --model_type=gpt2 --do_train --train_data_file=/home/ksjcom0705_gmail_com/NEWS_ARROW --overwrite_output_dir --per_device_train_batch_size=6 --save_steps 10000 --num_train_epochs=1 --block_size 2048 --eval_steps 10000 --logging_steps=10000 --tokenizer_name /home/ksjcom0705_gmail_com/kotok tpu_num_cores=8
```
The progress bar shows something like 250s/it, while on GPUs (2 V100s) it's about 1.2 it/s.
## Expected behavior
Training should be at least as fast as on GPUs, since my [home-brew code](https://github.com/ksjae/KoGPT2-train) runs at a comparable speed.
Also, MXU utilization is stuck at near-zero.
<img width="2177" alt="image" src="https://user-images.githubusercontent.com/17930170/96816530-663ec780-1458-11eb-9595-be91de708d03.png">
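For reference, per the comments on this issue, the slowdown turned out to come from variable-shape batches forcing XLA to recompile at every step; a minimal sketch of fixed-shape tokenization with `datasets` (the file name and `max_length` are illustrative):
```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

dataset = load_dataset("text", data_files=["train.txt"], split="train")
dataset = dataset.map(
    lambda ex: tokenizer(
        ex["text"],
        truncation=True,
        padding="max_length",  # every example gets the same length
        max_length=512,
    ),
    batched=True,
)
dataset.set_format(type="torch", columns=["input_ids"])
```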
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7962/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7961 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7961/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7961/comments | https://api.github.com/repos/huggingface/transformers/issues/7961/events | https://github.com/huggingface/transformers/issues/7961 | 726,986,169 | MDU6SXNzdWU3MjY5ODYxNjk= | 7,961 | A question about shift_tokens_right in BART model | {
"login": "liuslnlp",
"id": 17002231,
"node_id": "MDQ6VXNlcjE3MDAyMjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/17002231?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liuslnlp",
"html_url": "https://github.com/liuslnlp",
"followers_url": "https://api.github.com/users/liuslnlp/followers",
"following_url": "https://api.github.com/users/liuslnlp/following{/other_user}",
"gists_url": "https://api.github.com/users/liuslnlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liuslnlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liuslnlp/subscriptions",
"organizations_url": "https://api.github.com/users/liuslnlp/orgs",
"repos_url": "https://api.github.com/users/liuslnlp/repos",
"events_url": "https://api.github.com/users/liuslnlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/liuslnlp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It's correct, copied from fairseq (authors code).\r\nyou can think of `shift_tokens_right` as `shift_tokens_right_and_wrap_eos_to_position0`.\r\nIf you have empirical evidence that there is a change that improves fine-tuning, I'd be happy to incorporate it.\r\n",
"After I updated my version for this one, I'm also confused by the decoder input. \r\n\r\nWith this new encoding, after finetuning, bart-large is outputting:\r\n\r\nExample 1:\r\n\r\n```\r\n\"labels\": \"<s> There are too many traitors among our compatriots,</s><pad><pad><pad>\",\r\n \"decoder_input_ids\": \"</s><s> There are too many traitors among our compatriots,</s><pad><pad>\",\r\n \"generated_ids\": \"</s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s></s>\"\r\n```\r\n\r\nExample 2:\r\n\r\n```\r\n\"labels\": \"<s> Freedom of speech is relative. We need take national conditions into account.</s>\",\r\n \"decoder_input_ids\": \"</s><s> Freedom of speech is relative. We need take national conditions into account.\",\r\n \"generated_ids\": \"</s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s></s>\"\r\n```\r\n\r\n\r\n\r\nI'm trying to figure out why.\r\n\r\n I'm using the code provided by the examples/seq2seq folder. T5 works fine, but bart does not.",
"@leoribeiro if you are reporting a bug, could you explain what you did and what you expected more clearly in a separate issue?\r\nOtherwise, I don't understand why you say the encoding is new. `shift_tokens_right` hasn't changed.",
"@sshleifer thank you for your reply. What I mean by new encoding is adding `</s>` at the beginning of the decoder inputs. I think that in a previous transformer version (2.11.0), the code for BART did not use `</s>` at the beginning of the decoder inputs, correct? In the 2.11.0 version, my experiments with `facebook/bart-large` were working with the following code:\r\n\r\n```\r\n def _step(self, batch):\r\n pad_token_id = self.tokenizer.pad_token_id\r\n source_ids, source_mask, y = batch[\"source_ids\"], batch[\"source_mask\"], batch[\"target_ids\"]\r\n y_ids = y[:, :-1].contiguous()\r\n lm_labels = y[:, 1:].clone()\r\n lm_labels[y[:, 1:] == pad_token_id] = -100\r\n outputs = self(source_ids, attention_mask=source_mask, decoder_input_ids=y_ids, lm_labels=lm_labels,)\r\n\r\n loss = outputs[0]\r\n\r\n return loss\r\n```\r\n\r\nBut now, with the following code, my experiments with `facebook/bart-large` are not working:\r\n```\r\n def _step(self, batch: dict) -> Tuple:\r\n pad_token_id = self.tokenizer.pad_token_id\r\n src_ids, src_mask = batch[\"input_ids\"], batch[\"attention_mask\"]\r\n tgt_ids = batch[\"labels\"]\r\n if isinstance(self.model, T5ForConditionalGeneration):\r\n decoder_input_ids = self.model._shift_right(tgt_ids)\r\n else:\r\n decoder_input_ids = shift_tokens_right(tgt_ids, pad_token_id)\r\n if not self.already_saved_batch: # This would be slightly better if it only happened on rank zero\r\n batch[\"decoder_input_ids\"] = decoder_input_ids\r\n self.save_readable_batch(batch)\r\n outputs = self(src_ids, attention_mask=src_mask, decoder_input_ids=decoder_input_ids, use_cache=False)\r\n lm_logits = outputs[0]\r\n if self.hparams.label_smoothing == 0:\r\n # Same behavior as modeling_bart.py, besides ignoring pad_token_id\r\n ce_loss_fct = torch.nn.CrossEntropyLoss(ignore_index=pad_token_id)\r\n\r\n assert lm_logits.shape[-1] == self.vocab_size\r\n loss = ce_loss_fct(lm_logits.view(-1, lm_logits.shape[-1]), tgt_ids.view(-1))\r\n else:\r\n lprobs = torch.nn.functional.log_softmax(lm_logits, dim=-1)\r\n loss, nll_loss = label_smoothed_nll_loss(\r\n lprobs, tgt_ids, self.hparams.label_smoothing, ignore_index=pad_token_id\r\n )\r\n return (loss,)\r\n```\r\n\r\nI'm trying to understand if those things are related. The weird thing is that the exact same code works for `facebook/bart-base`. Please, see #8005. ",
"Moved to #8005, problem seems to be config related."
] | 1,603 | 1,603 | 1,603 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.10
- Platform: Ubuntu 16.04
- Python version: 3.7.3
- PyTorch version (GPU?): 1.6 GPU-version
### Who can help
@TevenLeScao @sshleifer
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Generally speaking, the decoder input of a seq2seq model is `<sos> tok1 tok2 … tokn` and the target is `tok1 tok2 … tokn <eos>` (shifted right). But in BART, the `shift_tokens_right` function produces the following result:
decoder input: `<eos><sos> tok1 tok2 … tokn`
target:`<sos> tok1 tok2 … tokn <eos>`
Is this a bug or correct?
## To reproduce
```python
from transformers import BartTokenizer, BartForConditionalGeneration

# model_path: a local checkpoint directory or a hub id such as 'facebook/bart-base'
tokenizer = BartTokenizer.from_pretrained(model_path)
model = BartForConditionalGeneration.from_pretrained(model_path)
inputs = tokenizer.prepare_seq2seq_batch(
    src_texts=['good morning.', ],
    tgt_texts=['good bye.'],
    max_length=100, return_tensors='pt'
)

# This function is copied from modeling_bart.py
def shift_tokens_right(input_ids, pad_token_id):
    """Shift input ids one token to the right, and wrap the last non pad token (usually <eos>)."""
    prev_output_tokens = input_ids.clone()
    index_of_eos = (input_ids.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
    # move the last non-pad token (usually <eos>) to position 0 ...
    prev_output_tokens[:, 0] = input_ids.gather(1, index_of_eos).squeeze()
    # ... and shift everything else one position to the right
    prev_output_tokens[:, 1:] = input_ids[:, :-1]
    return prev_output_tokens

tgt = inputs['labels'][0].tolist()
decoder_inputs = shift_tokens_right(inputs['labels'], tokenizer.pad_token_id)[0].tolist()
print("decoder inputs:", tokenizer.decode(decoder_inputs))
print("target:", tokenizer.decode(tgt))
```
The expected output is
```
decoder inputs: <s>good bye.
target: good bye.</s>
```
but it actually outputs
```
decoder inputs: </s><s>good bye.
target: <s>good bye.</s>
```
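For what it's worth, the leading `</s>` matches BART's configured decoder start token, so this looks intentional rather than accidental; a quick sanity check (assuming a stock BART checkpoint):
```python
from transformers import BartConfig

config = BartConfig.from_pretrained("facebook/bart-large")
# on stock BART checkpoints both of these are token id 2, i.e. </s>
print(config.decoder_start_token_id, config.eos_token_id)
```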
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7961/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7960 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7960/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7960/comments | https://api.github.com/repos/huggingface/transformers/issues/7960/events | https://github.com/huggingface/transformers/pull/7960 | 726,962,855 | MDExOlB1bGxSZXF1ZXN0NTA3OTQ2OTEx | 7,960 | RoBERTa convert-script modified to support mapping of bpe tokens | {
"login": "vesteinn",
"id": 353884,
"node_id": "MDQ6VXNlcjM1Mzg4NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/353884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vesteinn",
"html_url": "https://github.com/vesteinn",
"followers_url": "https://api.github.com/users/vesteinn/followers",
"following_url": "https://api.github.com/users/vesteinn/following{/other_user}",
"gists_url": "https://api.github.com/users/vesteinn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vesteinn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vesteinn/subscriptions",
"organizations_url": "https://api.github.com/users/vesteinn/orgs",
"repos_url": "https://api.github.com/users/vesteinn/repos",
"events_url": "https://api.github.com/users/vesteinn/events{/privacy}",
"received_events_url": "https://api.github.com/users/vesteinn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,603 | 1,619 | 1,619 | NONE | null | # Reordering of embeddings possible when converting roberta with dict file
When RoBERTa is trained using fairseq, the preprocessing pipeline BPE-encodes tokens and stores a mapping of token ids to BPE-token ids in a dict.txt file. This is mostly just a reordering of the tokens (potentially with a few missing ones if they were not found in the training data). The earlier version of the conversion script maps the fairseq model directly, creating a need for downstream processing to recover the original tokens; this change removes that need by allowing the embedding tensors to be reordered using the dict.txt file. This enables e.g. the "fill-mask" pipeline out of the box on a converted model.
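As a rough illustration of the reordering involved (hypothetical names; the actual script derives the mapping from dict.txt):
```python
import torch

# hypothetical: new_to_old[i] is the fairseq embedding row that should
# become row i of the converted HF model's embedding matrix
def reorder_embeddings(weight: torch.Tensor, new_to_old: list) -> torch.Tensor:
    index = torch.tensor(new_to_old, dtype=torch.long)
    return weight.index_select(0, index)
```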
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@julien-c (since prominent in blame)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7960/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7960",
"html_url": "https://github.com/huggingface/transformers/pull/7960",
"diff_url": "https://github.com/huggingface/transformers/pull/7960.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7960.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7959 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7959/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7959/comments | https://api.github.com/repos/huggingface/transformers/issues/7959/events | https://github.com/huggingface/transformers/issues/7959 | 726,958,493 | MDU6SXNzdWU3MjY5NTg0OTM= | 7,959 | T5-large on multiple gpus. | {
"login": "Palipoor",
"id": 16380397,
"node_id": "MDQ6VXNlcjE2MzgwMzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/16380397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Palipoor",
"html_url": "https://github.com/Palipoor",
"followers_url": "https://api.github.com/users/Palipoor/followers",
"following_url": "https://api.github.com/users/Palipoor/following{/other_user}",
"gists_url": "https://api.github.com/users/Palipoor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Palipoor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Palipoor/subscriptions",
"organizations_url": "https://api.github.com/users/Palipoor/orgs",
"repos_url": "https://api.github.com/users/Palipoor/repos",
"events_url": "https://api.github.com/users/Palipoor/events{/privacy}",
"received_events_url": "https://api.github.com/users/Palipoor/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"For OOM errors, I guess you need to reduce batch_size or parallelize over more GPUs.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,609 | 1,609 | NONE | null | Hi! I'm trying to fine-tune T5-large on multiple GPUs, so I basically use `torch.nn.DataParallel`. When I take the model output, which contains the loss, and call `loss.mean().backward()`, I run into a `cuda out of memory` error; I think the loss is gathered on the first GPU, which is already full.
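A minimal sketch of the pattern in question (illustrative inputs; the per-GPU losses are gathered onto GPU 0 by `DataParallel`):
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-large")
model = torch.nn.DataParallel(T5ForConditionalGeneration.from_pretrained("t5-large")).cuda()

enc = tokenizer(["translate English to German: hello"], return_tensors="pt")
labels = tokenizer(["hallo"], return_tensors="pt")["input_ids"]

outputs = model(input_ids=enc["input_ids"].cuda(),
                attention_mask=enc["attention_mask"].cuda(),
                labels=labels.cuda())
loss = outputs[0]        # one loss value per replica, gathered on GPU 0
loss.mean().backward()   # this is where the OOM happens
```
What should I do? | {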
"url": "https://api.github.com/repos/huggingface/transformers/issues/7959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7959/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7958 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7958/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7958/comments | https://api.github.com/repos/huggingface/transformers/issues/7958/events | https://github.com/huggingface/transformers/pull/7958 | 726,954,715 | MDExOlB1bGxSZXF1ZXN0NTA3OTQwMjMx | 7,958 | added qg evaluation notebook | {
"login": "zolekode",
"id": 25635679,
"node_id": "MDQ6VXNlcjI1NjM1Njc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25635679?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zolekode",
"html_url": "https://github.com/zolekode",
"followers_url": "https://api.github.com/users/zolekode/followers",
"following_url": "https://api.github.com/users/zolekode/following{/other_user}",
"gists_url": "https://api.github.com/users/zolekode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zolekode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zolekode/subscriptions",
"organizations_url": "https://api.github.com/users/zolekode/orgs",
"repos_url": "https://api.github.com/users/zolekode/repos",
"events_url": "https://api.github.com/users/zolekode/events{/privacy}",
"received_events_url": "https://api.github.com/users/zolekode/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks Patrick :)\r\n\r\nThank you for sharing this @zolekode !",
"@patrickvonplaten thanks for the correction"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | @patrickvonplaten, @TevenLeScao
I added a notebook to evaluate question generation models. Could you please take a look ? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7958/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7958",
"html_url": "https://github.com/huggingface/transformers/pull/7958",
"diff_url": "https://github.com/huggingface/transformers/pull/7958.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7958.patch",
"merged_at": 1603357332000
} |
https://api.github.com/repos/huggingface/transformers/issues/7957 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7957/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7957/comments | https://api.github.com/repos/huggingface/transformers/issues/7957/events | https://github.com/huggingface/transformers/issues/7957 | 726,924,716 | MDU6SXNzdWU3MjY5MjQ3MTY= | 7,957 | dropping "," in date because of Tokenization | {
"login": "vkaul11",
"id": 4062891,
"node_id": "MDQ6VXNlcjQwNjI4OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4062891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vkaul11",
"html_url": "https://github.com/vkaul11",
"followers_url": "https://api.github.com/users/vkaul11/followers",
"following_url": "https://api.github.com/users/vkaul11/following{/other_user}",
"gists_url": "https://api.github.com/users/vkaul11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vkaul11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vkaul11/subscriptions",
"organizations_url": "https://api.github.com/users/vkaul11/orgs",
"repos_url": "https://api.github.com/users/vkaul11/repos",
"events_url": "https://api.github.com/users/vkaul11/events{/privacy}",
"received_events_url": "https://api.github.com/users/vkaul11/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, I'm sorry but I really don't understand what the issue is. Could you clarify?",
"I think this comes from the original Albert tokenization, maybe you want to ask on google repository about this?\r\nhttps://github.com/google-research/albert/blob/master/tokenization.py#L67",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,609 | 1,609 | NONE | null | In my model, "May 7, 2020" is split like this:
```
['▁may', b'\xe2\x96\x818', ',', '▁2017']
```
I saw that this is where the problem comes from. Is there a reason this code was added to re-split any piece of length > 1 that ends in a digit followed by a comma?
```python
# From the slow tokenizer's `_tokenize`: re-split any SentencePiece piece
# that ends in "<digit>,". `pieces` is the raw SentencePiece output.
for piece in pieces:
    if len(piece) > 1 and piece[-1] == str(",") and piece[-2].isdigit():
        # Re-encode the piece without its trailing comma (and without the
        # SentencePiece underline marker).
        cur_pieces = self.sp_model.EncodeAsPieces(piece[:-1].replace(SPIECE_UNDERLINE, ""))
        if piece[0] != SPIECE_UNDERLINE and cur_pieces[0][0] == SPIECE_UNDERLINE:
            # Drop the spurious underline that EncodeAsPieces adds to the
            # first sub-piece when the original piece had none.
            if len(cur_pieces[0]) == 1:
                cur_pieces = cur_pieces[1:]
            else:
                cur_pieces[0] = cur_pieces[0][1:]
        cur_pieces.append(piece[-1])  # re-attach the comma as its own piece
        new_pieces.extend(cur_pieces)
    else:
        new_pieces.append(piece)
return new_pieces  # dedented outside the loop, as in the original source
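# A hedged reproduction sketch (hypothetical usage; the comments on this
# issue point at the Albert slow tokenizer as the home of this snippet):
#   from transformers import AlbertTokenizer
#   tok = AlbertTokenizer.from_pretrained("albert-base-v2")
#   tok.tokenize("May 7, 2017")  # the trailing "," gets split into its own piece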
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7957/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7956 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7956/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7956/comments | https://api.github.com/repos/huggingface/transformers/issues/7956/events | https://github.com/huggingface/transformers/issues/7956 | 726,904,077 | MDU6SXNzdWU3MjY5MDQwNzc= | 7,956 | T5 on multiple datasets | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"pinging the T5 master @patrickvonplaten ",
"Hey @rabeehk - not sure about that. You can check the T5 notebooks we provide or `https://discuss.huggingface.co/`.",
"Hi\nI could not find notebooks in the link you said.\nI am looking for a way to train multiple tasks at once in T5, similar to\nthis script:\nhttps://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/models/hf_model.py\nthanks for your help.\nBest\nRabeeh\n\nOn Thu, Oct 22, 2020 at 10:51 PM Patrick von Platen <\[email protected]> wrote:\n\n> Hey @rabeehk <https://github.com/rabeehk> - not sure about that. You can\n> check the T5 notebooks we provide or https://discuss.huggingface.co/.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/7956#issuecomment-714754836>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCFCMFQC5USZT3FRAFLSMCLNZANCNFSM4S2MCL7Q>\n> .\n>\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,609 | 1,609 | NONE | null | Hi Everyone,
Is there an example showing how to run T5 on multiple datasets? Greatly appreciated.
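For reference, a minimal sketch of what I have in mind (assuming the `datasets` library; `ds_a` and `ds_b` are hypothetical preprocessed datasets that already carry a T5-style task prefix in their input column):

```python
from datasets import interleave_datasets

# Sample from both tasks during training, e.g. 50/50.
mixed = interleave_datasets([ds_a, ds_b], probabilities=[0.5, 0.5], seed=42)
```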
Thanks.
Best
Rabeeh | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7956/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7955 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7955/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7955/comments | https://api.github.com/repos/huggingface/transformers/issues/7955/events | https://github.com/huggingface/transformers/pull/7955 | 726,891,608 | MDExOlB1bGxSZXF1ZXN0NTA3ODg3MzQ2 | 7,955 | [pip/setup.py] target management and updates | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Also what does `extras[\"all\"]` stand for? Is it a vestige of something?\r\nhttps://github.com/huggingface/transformers/blob/master/setup.py#L94\r\n\r\nI'd put it last and really put everything into `all` unless I'm missing a special purpose here.",
"Does `transformers` really work `torch==1.0` or would we realistically need to set some higher 1.x minimum? I'm curious whether anybody tested this. But I guess there is no need to waste time on this - someone will flag this in time if it's a problem.\r\n",
"I think I prefer the long version because I can understand it without thinking, this one... not so much.\r\n\r\nI was also going to take a stab at the setup because we have a recurring complaint from Windows user they can't make a dev install (some of those dependencies like `faiss` should be dropped if on Windows).\r\n\r\nAlso `flax` is still experimental. Not sure it should be in dev just yet, especially since I'm doubtful about its Windows support. It shouldn't prevent people from developing and making PRs to the library.",
"Well, we could make it into a wrapper function so it'd be easier to read, but either way works.\r\n\r\nThen at the very least can we add `docs` to `dev`?\r\n\r\nwith hardcoded targets in `dev` - let's have just one definition where we put numerical requirements (==, !=, etc.)",
"Yes adding `docs` to `dev` is definitely useful.",
"would this be easier to read:\r\n```\r\ndef combine_targets(names):\r\n return list(chain(*map(extras.get, names)))\r\n\r\nextras[\"dev\"] = combine_targets(\"testing quality docs ja sklearn flax tf torch sentencepiece\".split())\r\n# or:\r\nextras[\"dev\"] = combine_targets([\"testing\", \"quality\", \"docs\", \"ja\", \"sklearn\", \"flax\", \"tf\", \"torch\", \"sentencepiece\"])\r\n```\r\nor you'd rather keep:\r\n```\r\nextras[\"dev\"] = extras[\"testing\"] + extras[\"quality\"] + extras[\"docs\"] + extras[\"flax\"] + extras[\"ja\"] + \\\r\n extras[\"sklearn\"] + extras[\"tf\"] + extras[\"torch\"] + extras[\"sentencepiece\"] \r\n```\r\n",
"I agree with @sgugger that the `list(chain(*map(...` is :dizzy_face:.\r\nThe way it's currently setup is fine by me, but your proposed fix:\r\n```py\r\nextras[\"dev\"] = combine_targets([\"testing\", \"quality\", \"docs\", \"ja\", \"sklearn\", \"flax\", \"tf\", \"torch\", \"sentencepiece\"])\r\n```\r\nis also fine by me.",
"@LysandreJik, please have another look - we discussed this with @sgugger on slack and expanded this further to be even more flexible - easy to read vertical listing plus built-in comments are now supported.",
"If it looks too busy we can merge the base groups into a dict, so it'll look less busy and will be more compact:\r\n\r\nSo instead of this:\r\n```\r\nextras[\"serving\"] = to_list(\"\"\"\r\n fastapi\r\n pydantic\r\n starlette\r\n uvicorn\r\n\"\"\")\r\n\r\nextras[\"sentencepiece\"] = to_list(\"\"\"\r\n sentencepiece!=0.1.92\r\n\"\"\")\r\n\r\nextras[\"retrieval\"] = to_list(\"\"\"\r\n datasets\r\n faiss-cpu\r\n\"\"\")\r\n\r\nextras[\"testing-base\"] = to_list(\"\"\"\r\n parameterized\r\n psutil\r\n pytest\r\n pytest-xdist\r\n timeout-decorator\r\n\"\"\")\r\n\r\n```\r\nit'd be:\r\n```\r\nextras = dict(\r\n serving=to_list(\"\"\"\r\n fastapi\r\n pydantic\r\n starlette # some explanation\r\n uvicorn\r\n\"\"\"),\r\n sentencepiece=to_list(\"\"\"\r\n sentencepiece!=0.1.92\r\n\"\"\"),\r\n retrieval=to_list(\"\"\"\r\n datasets\r\n faiss-cpu\r\n\"\"\"),\r\n testing-base=to_list(\"\"\"\r\n parameterized\r\n psutil\r\n pytest\r\n pytest-xdist # some comment\r\n timeout-decorator\r\n\"\"\"),\r\n)\r\n```\r\nActually, if we decide to go the dict way we can do all the processing later, why repeat the same function all the time, so it'd just leave:\r\n\r\n```\r\nextras = dict(\r\n serving=\"\"\"\r\n fastapi\r\n pydantic\r\n starlette # some explanation\r\n uvicorn\r\n\"\"\",\r\n sentencepiece=\"\"\"\r\n sentencepiece!=0.1.92\r\n\"\"\",\r\n retrieval=\"\"\"\r\n datasets\r\n faiss-cpu\r\n\"\"\",\r\n testing-base=\"\"\"\r\n parameterized\r\n psutil\r\n pytest\r\n pytest-xdist # some comment\r\n timeout-decorator\r\n\"\"\",\r\n)\r\nextras = process(extras) # not written yet.\r\n```",
"It's different, but it's consistent. You never need to read everything at once - you only would care about reading one entry - a group or a subgroup - this is not code but a table of definitions - like a spreadsheet. You can always squash the vertical entries into a horizontal line, by losing the readability and functionality offered by the spreadsheet-type of data. \r\n\r\nI proposed here a much more compact way: https://github.com/huggingface/transformers/pull/7955#issuecomment-714845771\r\n\r\nAlso the idea is to have just one base definition with the specific version if any and a comment why it is so if needed including the non-optional requirements. Otherwise it's too easy to forget to update multiple definitions of the same.\r\n\r\nThese are just different suggestions, please feel free to cherry pick some, all or none and close this PR as well. No hard feelings.",
"As I personally don't see any improvements regarding readability in the offered solutions, I would vote to keep the original approach. It's a personal preference choice so I'm willing to compromise if others disagree!\r\n\r\nAll the other changes in the PR look good to me.",
"Thank you for indicating that the proposed change is not fitting, @LysandreJik and @sgugger."
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | This PR
* [x] does a major revamp of how targets are specified and merged into groups. It lists all the dependencies vertically and allows you to write comments next to them if needed, or to comment them out entirely
* [x] adds `docs` to `dev`, since we need to have the tools to run `make docs`
* [x] adds `flax` to `dev`, since we need the libs to run the flax tests (except on Windows, where it is skipped)
* [x] brings `all` up-to-date
Note: I removed the hardcoded `+ ["scikit-learn", "tensorflow", "torch", "sentencepiece!=0.1.92"]` and replaced it with the full targets for `tf`, `torch`, etc., which include other deps. I'm not sure why we wouldn't want all of them.
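For illustration, a minimal sketch of the vertical-list idea discussed in the review thread (the `to_list` name comes from that thread; this is a sketch, not the exact diff):

```python
def to_list(block: str) -> list:
    """Turn a multi-line requirements block into a flat list.

    Blank lines are skipped and everything after '#' is a comment, so each
    dependency can carry an inline explanation or be commented out.
    """
    deps = []
    for line in block.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if line:
            deps.append(line)
    return deps

extras = {}
extras["retrieval"] = to_list("""
    datasets
    faiss-cpu  # CPU build; GPU users can swap in faiss-gpu
""")
```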
@LysandreJik, @sgugger, @thomwolf | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7955/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7955",
"html_url": "https://github.com/huggingface/transformers/pull/7955",
"diff_url": "https://github.com/huggingface/transformers/pull/7955.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7955.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7954 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7954/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7954/comments | https://api.github.com/repos/huggingface/transformers/issues/7954/events | https://github.com/huggingface/transformers/issues/7954 | 726,833,255 | MDU6SXNzdWU3MjY4MzMyNTU= | 7,954 | TF: Faster way to set one column/all but one column of a tensor to -inf | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Solution: https://stackoverflow.com/questions/64575346/tensorflow-set-column-of-tensor-to-infinity"
] | 1,603 | 1,604 | 1,604 | CONTRIBUTOR | null | In `_force_token_id_to_be_generated` we have much simpler torch code:
```python
scores[:, [x for x in range(scores.shape[1]) if x != token_id]] = -float("inf")
```
Is it possible to make the TF code simpler? TF doesn't support in-place item assignment, but maybe converting to and from `numpy` could be faster. It would definitely be simpler.
```python
@staticmethod
def _force_token_id_to_be_generated(scores, token_id) -> tf.Tensor:
    """Force `token_id` to be generated by setting the prob of every other token to 0 (logprob = -inf)."""
    output_list = []
    # Is there a better way to do this in TF?
    bs, vocab_size = scores.shape
    # One column of -inf per batch row, reused for every masked vocab position.
    inf_tensor = tf.convert_to_tensor([-float("inf")] * bs, dtype=scores.dtype)
    for x in range(vocab_size):
        if x != token_id:
            output_list.append(inf_tensor)
        else:
            output_list.append(scores[:, x])
    # Rebuild the (bs, vocab_size) matrix column by column.
    scores = tf.stack(output_list, axis=1, name="scores")
    assert scores.shape == (bs, vocab_size)
    return scores
```
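For the record, a vectorized alternative in the spirit of the Stack Overflow answer linked in the comments; a minimal sketch, assuming TF 2.x broadcasting in `tf.where`:

```python
import tensorflow as tf

def force_token_id(scores: tf.Tensor, token_id: int) -> tf.Tensor:
    # keep[j] is True only at the forced token's column.
    keep = tf.equal(tf.range(scores.shape[-1]), token_id)
    neg_inf = tf.ones_like(scores) * -float("inf")
    # Broadcast the (vocab_size,) mask over the batch dimension.
    return tf.where(keep, scores, neg_inf)
```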
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7954/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7953 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7953/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7953/comments | https://api.github.com/repos/huggingface/transformers/issues/7953/events | https://github.com/huggingface/transformers/issues/7953 | 726,825,608 | MDU6SXNzdWU3MjY4MjU2MDg= | 7,953 | 'EncoderDecoderModel' object has no attribute '_init_weights' after `model.resize_token_embeddings(len(tokenizer))` | {
"login": "XinXia2019",
"id": 46977022,
"node_id": "MDQ6VXNlcjQ2OTc3MDIy",
"avatar_url": "https://avatars.githubusercontent.com/u/46977022?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XinXia2019",
"html_url": "https://github.com/XinXia2019",
"followers_url": "https://api.github.com/users/XinXia2019/followers",
"following_url": "https://api.github.com/users/XinXia2019/following{/other_user}",
"gists_url": "https://api.github.com/users/XinXia2019/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XinXia2019/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XinXia2019/subscriptions",
"organizations_url": "https://api.github.com/users/XinXia2019/orgs",
"repos_url": "https://api.github.com/users/XinXia2019/repos",
"events_url": "https://api.github.com/users/XinXia2019/events{/privacy}",
"received_events_url": "https://api.github.com/users/XinXia2019/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hello @XinXia2019, sadly `resize_token_embeddings` is not supported yet for `EncoderDecoderModel`. Instead you could just manually instantiate the encoder and decoder and apply `resize_token_embeddings` on each part before wrapping them into the `EncoderDecoderModel` framework.",
"Thanks!",
"(for anyone that might stumble upon this)\r\n\r\nI believe you can access the encoder and decoder models from the EncoderDecoderModel instance and resize their corresponding token embeddings, e.g.:\r\n\r\n```\r\n...\r\ntokenizer_length = len(tokenizer)\r\nmodel.encoder.resize_token_embeddings(tokenizer_length)\r\nmodel.decoder.resize_token_embeddings(tokenizer_length)\r\n...\r\n\r\n```"
] | 1,603 | 1,623 | 1,603 | NONE | null | ## Details
Version transformers==3.4.0
torch==1.6.0
torchvision==0.7.0
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
>>>tokenizer.add_special_tokens({"additional_special_tokens": ['extra1', 'extra2']})
1
>>> model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased')
>>> model.resize_token_embeddings(len(tokenizer))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/xxia/anaconda3/lib/python3.8/site-packages/transformers/modeling_utils.py", line 607, in resize_token_embeddings
model_embeds = base_model._resize_token_embeddings(new_num_tokens)
File "/Users/xxia/anaconda3/lib/python3.8/site-packages/transformers/modeling_utils.py", line 622, in _resize_token_embeddings
new_embeddings = self._get_resized_embeddings(old_embeddings, new_num_tokens)
File "/Users/xxia/anaconda3/lib/python3.8/site-packages/transformers/modeling_utils.py", line 659, in _get_resized_embeddings
self._init_weights(new_embeddings)
File "/Users/xxia/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 771, in __getattr__
raise ModuleAttributeError("'{}' object has no attribute '{}'".format(
torch.nn.modules.module.ModuleAttributeError: 'EncoderDecoderModel' object has no attribute '_init_weights'
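For reference, a workaround sketch based on the fix suggested in this issue's comments: resize each sub-model's embeddings instead of calling `resize_token_embeddings` on the wrapper.

```python
model.encoder.resize_token_embeddings(len(tokenizer))
model.decoder.resize_token_embeddings(len(tokenizer))
```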
Thanks for the help! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7953/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7952 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7952/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7952/comments | https://api.github.com/repos/huggingface/transformers/issues/7952/events | https://github.com/huggingface/transformers/pull/7952 | 726,823,092 | MDExOlB1bGxSZXF1ZXN0NTA3ODI5NzQ3 | 7,952 | model card for German Sentence Embeddings V2 | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Heya - Are there any concerns to merge this PR? Please let me know.\r\n\r\nMany thanks, Philip"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | - new model card for "German RoBERTa for Sentence Embeddings V2"
- marked old model as outdated | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7952/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7952",
"html_url": "https://github.com/huggingface/transformers/pull/7952",
"diff_url": "https://github.com/huggingface/transformers/pull/7952.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7952.patch",
"merged_at": 1603464355000
} |
https://api.github.com/repos/huggingface/transformers/issues/7951 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7951/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7951/comments | https://api.github.com/repos/huggingface/transformers/issues/7951/events | https://github.com/huggingface/transformers/issues/7951 | 726,810,832 | MDU6SXNzdWU3MjY4MTA4MzI= | 7,951 | Unexpected/wrong handling of added special tokens in special_tokens_mask (GPT1, BERT, possibly others) | {
"login": "matejklemen",
"id": 17293960,
"node_id": "MDQ6VXNlcjE3MjkzOTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/17293960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matejklemen",
"html_url": "https://github.com/matejklemen",
"followers_url": "https://api.github.com/users/matejklemen/followers",
"following_url": "https://api.github.com/users/matejklemen/following{/other_user}",
"gists_url": "https://api.github.com/users/matejklemen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matejklemen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matejklemen/subscriptions",
"organizations_url": "https://api.github.com/users/matejklemen/orgs",
"repos_url": "https://api.github.com/users/matejklemen/repos",
"events_url": "https://api.github.com/users/matejklemen/events{/privacy}",
"received_events_url": "https://api.github.com/users/matejklemen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n> \n\nKeep it open",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"> This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n> \n> Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.\n\n👋",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"> This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\r\n> \r\n> Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.\r\n\r\nBump"
] | 1,603 | 1,621 | null | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.1.0
- Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: **No**
- Using distributed or parallel set-up in script?: **No**
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
The most appropriate person seems to be @mfuntowicz (tokenization); blame says @thomwolf.
## Information
Model I am using (Bert, XLNet ...): OpenAI GPT (also BERT)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
I am adding special tokens (`BOS`, `SEP` and `EOS`) to the GPT1 tokenizer in order to format and fine-tune a GPT model a bit differently. I am also making use of the convenient `return_special_tokens_mask` argument of `encode_plus()`, but the returned mask does not mark the added custom special tokens as special.
The same is also true when adding custom special tokens to the BERT tokenizer; I did not check beyond these two.
The problem for GPT seems to be that `get_special_tokens_mask()` in `tokenization_utils.py` does not take any special tokens into account:
```python
def get_special_tokens_mask(
    self, token_ids_0: List, token_ids_1: Optional[List] = None, already_has_special_tokens: bool = False
) -> List[int]:
    return [0] * ((len(token_ids_1) if token_ids_1 else 0) + len(token_ids_0))
```
For BERT, it only seems to take into account `[CLS]` and `[SEP]`.
## To reproduce
```python
from transformers import OpenAIGPTTokenizer
tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
tokenizer.add_special_tokens({
    "bos_token": "<bos>",
    "sep_token": "<sep>",
    "eos_token": "<eos>"
})
# Does not work this way either:
# tokenizer.add_special_tokens({
#     "additional_special_tokens": ["<bos>", "<sep>", "<eos>"]
# })
encoded = tokenizer.encode_plus("<bos> State your name, rank and intention <sep> The Doctor, doctor, fun. <eos>",
                                return_special_tokens_mask=True)
print(encoded["input_ids"])
print(encoded["special_tokens_mask"]) # This returns all zeros
```
## Expected behavior
I would expect the additional special tokens to also be marked as special, i.e. that the `special_tokens_mask` for the above snippet returns `[1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1]`.
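In the meantime, a hedged workaround sketch that builds the mask from the tokenizer's registered special-token ids (`all_special_ids` includes added special tokens; the variable names are illustrative):

```python
special_ids = set(tokenizer.all_special_ids)
manual_mask = [int(tok in special_ids) for tok in encoded["input_ids"]]
print(manual_mask)  # marks <bos>, <sep> and <eos> as special
```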
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7951/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7951/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7950 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7950/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7950/comments | https://api.github.com/repos/huggingface/transformers/issues/7950/events | https://github.com/huggingface/transformers/issues/7950 | 726,755,907 | MDU6SXNzdWU3MjY3NTU5MDc= | 7,950 | Code bug in tokenization_utils.py? | {
"login": "liwei-cpp",
"id": 38450168,
"node_id": "MDQ6VXNlcjM4NDUwMTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/38450168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liwei-cpp",
"html_url": "https://github.com/liwei-cpp",
"followers_url": "https://api.github.com/users/liwei-cpp/followers",
"following_url": "https://api.github.com/users/liwei-cpp/following{/other_user}",
"gists_url": "https://api.github.com/users/liwei-cpp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liwei-cpp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liwei-cpp/subscriptions",
"organizations_url": "https://api.github.com/users/liwei-cpp/orgs",
"repos_url": "https://api.github.com/users/liwei-cpp/repos",
"events_url": "https://api.github.com/users/liwei-cpp/events{/privacy}",
"received_events_url": "https://api.github.com/users/liwei-cpp/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Pinging @mfuntowicz, @n1t0 for their opinions",
"I think we should delegate to the underlying tokenization algorithm and never strip like we do here. This what is done in `tokenizers` and thus one of the source of discrepancies between fast and slow tokenizers.\n\nFor most tokenizers this is not a problem since these white space get removed later anyway, but for some others (like gpt2) it is.\n\nNote: it is impossible to build tokenizers relying on the formatting with such rules. For example it would be impossible to tokenize some Python.\n\n(Cc @thomwolf)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,609 | 1,609 | NONE | null | The function `split_on_tokens` in `tokenization_utils.py` contains the following logic:
```python
if not text.strip():
    return []
if not tok_list:
    return self._tokenize(text)
```
So if the text contains only whitespace, `[]` will be returned. However, if the text contains whitespace along with visible characters, the whitespace is used in tokenization. For example, with the following code:
```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
print(tokenizer("\nNorth")['input_ids'])  # output [198, 14157], since 198 <-> \n and 14157 <-> North
print(tokenizer("\n")['input_ids'])       # output [], even if 198 <-> \n
```
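For comparison, a hedged sketch with the fast tokenizer, which (per the maintainer comment above) delegates whitespace handling to the underlying BPE instead of stripping:

```python
from transformers import GPT2TokenizerFast

fast_tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
print(fast_tokenizer("\n")['input_ids'])  # presumably [198], keeping the newline token
```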
Is this the behavior we expect? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7950/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7950/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7949 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7949/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7949/comments | https://api.github.com/repos/huggingface/transformers/issues/7949/events | https://github.com/huggingface/transformers/pull/7949 | 726,735,385 | MDExOlB1bGxSZXF1ZXN0NTA3NzU1Mjg1 | 7,949 | fix 'encode_plus' docstring for 'special_tokens_mask' (0s and 1s were reversed) | {
"login": "epwalsh",
"id": 8812459,
"node_id": "MDQ6VXNlcjg4MTI0NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8812459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/epwalsh",
"html_url": "https://github.com/epwalsh",
"followers_url": "https://api.github.com/users/epwalsh/followers",
"following_url": "https://api.github.com/users/epwalsh/following{/other_user}",
"gists_url": "https://api.github.com/users/epwalsh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/epwalsh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/epwalsh/subscriptions",
"organizations_url": "https://api.github.com/users/epwalsh/orgs",
"repos_url": "https://api.github.com/users/epwalsh/repos",
"events_url": "https://api.github.com/users/epwalsh/events{/privacy}",
"received_events_url": "https://api.github.com/users/epwalsh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Fixes the docstring for `encode_plus`.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger or anyone really.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7949/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7949",
"html_url": "https://github.com/huggingface/transformers/pull/7949",
"diff_url": "https://github.com/huggingface/transformers/pull/7949.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7949.patch",
"merged_at": 1603303065000
} |
https://api.github.com/repos/huggingface/transformers/issues/7948 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7948/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7948/comments | https://api.github.com/repos/huggingface/transformers/issues/7948/events | https://github.com/huggingface/transformers/issues/7948 | 726,685,551 | MDU6SXNzdWU3MjY2ODU1NTE= | 7,948 | Error: should have a 'get_encoder' function defined when running model.generate() | {
"login": "toomtobias",
"id": 73246285,
"node_id": "MDQ6VXNlcjczMjQ2Mjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/73246285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/toomtobias",
"html_url": "https://github.com/toomtobias",
"followers_url": "https://api.github.com/users/toomtobias/followers",
"following_url": "https://api.github.com/users/toomtobias/following{/other_user}",
"gists_url": "https://api.github.com/users/toomtobias/gists{/gist_id}",
"starred_url": "https://api.github.com/users/toomtobias/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/toomtobias/subscriptions",
"organizations_url": "https://api.github.com/users/toomtobias/orgs",
"repos_url": "https://api.github.com/users/toomtobias/repos",
"events_url": "https://api.github.com/users/toomtobias/events{/privacy}",
"received_events_url": "https://api.github.com/users/toomtobias/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Don't see that in the docs.\r\nthe docs use `BartForConditionalGeneration`.\r\nYou could also use `AutoModelForSeq2SeqLM`.\r\n\r\n"
] | 1,603 | 1,603 | 1,603 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: Windows-10-10.0.17134-SP0
- Python version: 3.8.4
- PyTorch version (GPU?): 1.5.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: don't know
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.-->
@sshleifer
## Information
Model I am using (Bert, XLNet ...): facebook/bart-large-cnn
The problem arises when using:
* [X ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the code below
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModel.from_pretrained("facebook/bart-large-cnn")
ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt')
# Generate Summary
summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=5, early_stopping=True)
```
The error I get:
```
>>> summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=5, early_stopping=True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\scbtoto\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\autograd\grad_mode.py", line 15, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\scbtoto\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\generation_utils.py", line 401, in generate
    assert hasattr(self, "get_encoder"), "{} should have a 'get_encoder' function defined".format(self)
AssertionError: BartModel(
  (shared): Embedding(50264, 1024, padding_idx=1)
  (encoder): BartEncoder(
    (embed_tokens): Embedding(50264, 1024, padding_idx=1)
    (embed_positions): LearnedPositionalEmbedding(1026, 1024, padding_idx=1)
    (layers): ModuleList(
      (0): EncoderLayer(
        (self_attn): Attention(
          (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
          (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
          (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
          (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
        )
        ...
        ...
        ...
      )
    (layernorm_embedding): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
  )
) should have a 'get_encoder' function defined
>>>
```
## Expected behavior
I'm trying to run the basic example from the docs found here:
https://huggingface.co/transformers/model_doc/bart.html
I can run the code below without a problem, so transformers should be properly installed according to the installation docs:
```bash
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"
```
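For completeness, a hedged fix sketch based on the reply recorded in this issue's comments: load a class with a generation head instead of the bare encoder-decoder trunk.

```python
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")
summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=5, early_stopping=True)
```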
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7948/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7947 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7947/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7947/comments | https://api.github.com/repos/huggingface/transformers/issues/7947/events | https://github.com/huggingface/transformers/pull/7947 | 726,685,358 | MDExOlB1bGxSZXF1ZXN0NTA3NzExNzIx | 7,947 | [GPT2 batch generation] Make test clearer. `do_sample=True` is not deterministic. | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #7745
A small fix that deletes an unnecessary line from the test.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7947/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7947",
"html_url": "https://github.com/huggingface/transformers/pull/7947",
"diff_url": "https://github.com/huggingface/transformers/pull/7947.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7947.patch",
"merged_at": 1603299983000
} |
https://api.github.com/repos/huggingface/transformers/issues/7946 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7946/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7946/comments | https://api.github.com/repos/huggingface/transformers/issues/7946/events | https://github.com/huggingface/transformers/issues/7946 | 726,643,549 | MDU6SXNzdWU3MjY2NDM1NDk= | 7,946 | EncoderDecoderModel loss function | {
"login": "AI678",
"id": 63541083,
"node_id": "MDQ6VXNlcjYzNTQxMDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/63541083?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AI678",
"html_url": "https://github.com/AI678",
"followers_url": "https://api.github.com/users/AI678/followers",
"following_url": "https://api.github.com/users/AI678/following{/other_user}",
"gists_url": "https://api.github.com/users/AI678/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AI678/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AI678/subscriptions",
"organizations_url": "https://api.github.com/users/AI678/orgs",
"repos_url": "https://api.github.com/users/AI678/repos",
"events_url": "https://api.github.com/users/AI678/events{/privacy}",
"received_events_url": "https://api.github.com/users/AI678/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, this depends on the decoder you use to initialize the encoder-decoder model. What decoder do you use?",
"I use 'bert-base-uncased'. just like this \r\nmodel = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased')",
"I'm not sure this is the recommended way to load the models as it gives the following result:\r\n\r\n```\r\nSome weights of BertLMHeadModel were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['bert.encoder.layer.0.crossattention.self.query.weight', 'bert.encoder.layer.0.crossattention.self.query.bias', [...]\r\n```\r\nwith pretty much all model weights.\r\n\r\nWill ping @patrickvonplaten for advice.",
"Hey @AI678, \r\n\r\n1) The model should be initialized just as you did with\r\n\r\n```python\r\nmodel = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased')\r\n```\r\n\r\nIt's normal that `None` of the cross-attention layers are initialized because BERT does not have any and they have to be fine-tuned down the road. \r\n\r\n2) To Train a Bert2Bert, you are also correct in doing: \r\n\r\n```python\r\noutputs = model(input_ids=src, attention_mask=mask, decoder_input_ids=dst, labels=dst, return_dict=True)\r\nloss, logits = outputs.loss, outputs.logits\r\n```\r\n\r\nbecause BERT automatically shifts the labels for you, see: https://github.com/huggingface/transformers/blob/901e9b8eda2fe88af717f960ddc05cac1803679b/src/transformers/modeling_bert.py#L1060\r\n\r\nAlso I'll publish a more in-detail notebook about \"Leveraging Encoder-Decoder models\" soon. This model card could also be helpful: https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16#bert2bert-summarization-with-%F0%9F%A4%97-encoderdecoder-framework\r\n",
"thank you very much"
] | 1,603 | 1,603 | 1,603 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
Hey, I want to ask the following questions.
How is the loss calculated in EncoderDecoderModel? What is the mathematical formula of the loss function?
I just wrote the code like this:
outputs = model(input_ids=src, attention_mask=mask, decoder_input_ids=dst, labels=dst, return_dict=True)
loss, logits = outputs.loss, outputs.logits
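For reference, here is a minimal sketch of what I think the loss boils down to when `labels` are passed to a BERT decoder (illustrative only, not the library internals; it reuses `logits` and `dst` from the snippet above and relies on the internal label shift):
```python
import torch.nn.functional as F

# Next-token cross-entropy: position t predicts token t+1.
shifted_logits = logits[:, :-1, :].contiguous()
shifted_labels = dst[:, 1:].contiguous()
loss_manual = F.cross_entropy(
    shifted_logits.view(-1, shifted_logits.size(-1)),
    shifted_labels.view(-1),
)
```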
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7946/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7945 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7945/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7945/comments | https://api.github.com/repos/huggingface/transformers/issues/7945/events | https://github.com/huggingface/transformers/pull/7945 | 726,582,386 | MDExOlB1bGxSZXF1ZXN0NTA3NjIyODg0 | 7,945 | Move NoLayerEmbedTokens | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"`TFWrappedEmbeddings` is definitely way better, but as a user, I still don't understand what this means. Do you think we could add a comment where it's used? Maybe something along the lines of:\r\n\r\n```\r\n# Wraps layer to avoid problems with weight restoring and ensuring we're in the correct TF scope.\r\n```",
"Done, thanks for writing it out!"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | As agreed upon with @patrickvonplaten, this moves the very useful, very model-agnostic `NoLayerEmbedTokens` to `modeling_tf_utils.py`, where it can be used by `TFBart` and `TFT5`.
`TFProphetNet` and other seq2seq models may also eventually need it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7945/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7945",
"html_url": "https://github.com/huggingface/transformers/pull/7945",
"diff_url": "https://github.com/huggingface/transformers/pull/7945.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7945.patch",
"merged_at": 1603397629000
} |
https://api.github.com/repos/huggingface/transformers/issues/7944 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7944/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7944/comments | https://api.github.com/repos/huggingface/transformers/issues/7944/events | https://github.com/huggingface/transformers/pull/7944 | 726,522,230 | MDExOlB1bGxSZXF1ZXN0NTA3NTc0MDcz | 7,944 | [ProphetNet] Correct Doc string example | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can we have the examples take less than 119 chars (i'd even settle for 200 honestly)?\r\n\r\n",
"> Can we have the examples take less than 119 chars (i'd even settle for 200 honestly)?\r\n\r\nCan I break lines while using `>>>` ? Or just use a smaller input text? ",
"> Can I break lines while using `>>>` ? Or just use a smaller input text?\r\n\r\nYou use `... ` instead of `>>> ` for the intermediate lines, but yes you can. See the [quicktour](https://github.com/huggingface/transformers/blob/master/docs/source/quicktour.rst) for an example (scroll down to \"That's encouraging! You can use it on a list of sentences\" since GitHub doesn't let me link a specific line in a rst file).",
"> Thanks :-)\r\n\r\nMy Pylinter doesn't pick up the docstring :-/ will have to find a way to fix this. Sorry for all those long lines in the docs "
] | 1,603 | 1,603 | 1,603 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7944/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7944",
"html_url": "https://github.com/huggingface/transformers/pull/7944",
"diff_url": "https://github.com/huggingface/transformers/pull/7944.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7944.patch",
"merged_at": 1603294041000
} |
https://api.github.com/repos/huggingface/transformers/issues/7943 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7943/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7943/comments | https://api.github.com/repos/huggingface/transformers/issues/7943/events | https://github.com/huggingface/transformers/pull/7943 | 726,384,868 | MDExOlB1bGxSZXF1ZXN0NTA3NDU3MjQy | 7,943 | [PretrainedConfig] Fix save pretrained config for edge case | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Any reason not to look at just the config class? At a first glance, I'd say we want to compare the defaults to the class we instantiated, not to the superclass `PretrainedConfig`.",
"> Any reason not to look at just the config class? At a first glance, I'd say we want to compare the defaults to the class we instantiated, not to the superclass `PretrainedConfig`.\r\n\r\nBack then this was my initial idea as well - but then the configs could be more or less emtpy if all parameters are the same. This has a couple of disadvantages:\r\n- When looking at the config online people cannot see any parameters and would have to look into the code which might be annoying\r\n- This would make the configs much more prone to break if the init values of respective classes are changed.",
"**UPDATE**: I had to add a class attribute to the config to make this feature work (see description above) - @julien-c @sgugger @thomwolf @LysandreJik - could you check if this is fine for you guys.",
"LGTM"
] | 1,603 | 1,603 | 1,603 | MEMBER | null | # What does this PR do?
There is an edge case for which the "diff" save method for `PretrainedConfig` fails. We decided a while ago in this PR: https://github.com/huggingface/transformers/pull/3797 that we wanted to have more readable configs and thus tweaked the `save_pretrained()` method so that only parameters that differ from the defaults of the base **PretrainedConfig** class are serialized.
There was an edge case we did not consider:
If a parameter like `add_cross_attention` defaults to `True` in `ProphetNetConfig` but defaults to `False` in `PretrainedConfig`, a problem can arise when a user wants to save `add_cross_attention=False` in their `ProphetNetConfig`. Because `add_cross_attention=False` corresponds to the `PretrainedConfig` default, this parameter is not serialized; when the config is reloaded, the parameter falls back to the `ProphetNetConfig` default, which is `True`, and this causes an error.
This PR fixes this behavior by making sure that a parameter is only **not** saved if it is equal to the default in both `PretrainedConfig` and the concrete class, e.g. `ProphetNetConfig`.
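A minimal sketch of the new rule (illustrative only, not the verbatim implementation; it assumes the config class can be instantiated without arguments, which connects to the `is_composition` note below):
```python
from transformers import PretrainedConfig

def to_diff_dict_sketch(config):
    base_defaults = PretrainedConfig().to_dict()
    class_defaults = type(config)().to_dict()  # requires a no-arg constructor
    # Drop a key only if it matches BOTH defaults; otherwise serialize it.
    return {
        key: value
        for key, value in config.to_dict().items()
        if key not in base_defaults
        or value != base_defaults[key]
        or value != class_defaults.get(key)
    }
```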
This feature requires configs to be instantiated without providing any parameters. This is currently not possible for `EncoderDecoderConfig` and `RagConfig` because those configs are composed of multiple sub-configs which have to be provided. => A new class attribute `is_composition` is added to correctly handle these classes.
Two tests are added.
Also cc @stas00 for the FSMT config.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7943/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7943",
"html_url": "https://github.com/huggingface/transformers/pull/7943",
"diff_url": "https://github.com/huggingface/transformers/pull/7943.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7943.patch",
"merged_at": 1603373942000
} |
https://api.github.com/repos/huggingface/transformers/issues/7942 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7942/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7942/comments | https://api.github.com/repos/huggingface/transformers/issues/7942/events | https://github.com/huggingface/transformers/pull/7942 | 726,294,063 | MDExOlB1bGxSZXF1ZXN0NTA3MzgyMjUz | 7,942 | [ProphetNet] Add Question Generation Model + Test | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | MEMBER | null | Thanks a lot for providing the model, @qiweizhen! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7942/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7942",
"html_url": "https://github.com/huggingface/transformers/pull/7942",
"diff_url": "https://github.com/huggingface/transformers/pull/7942.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7942.patch",
"merged_at": 1603273799000
} |
https://api.github.com/repos/huggingface/transformers/issues/7941 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7941/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7941/comments | https://api.github.com/repos/huggingface/transformers/issues/7941/events | https://github.com/huggingface/transformers/pull/7941 | 726,281,029 | MDExOlB1bGxSZXF1ZXN0NTA3MzcxNzAw | 7,941 | [RAG] Handle the case when title is None while loading own datasets | {
"login": "lalitpagaria",
"id": 19303690,
"node_id": "MDQ6VXNlcjE5MzAzNjkw",
"avatar_url": "https://avatars.githubusercontent.com/u/19303690?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lalitpagaria",
"html_url": "https://github.com/lalitpagaria",
"followers_url": "https://api.github.com/users/lalitpagaria/followers",
"following_url": "https://api.github.com/users/lalitpagaria/following{/other_user}",
"gists_url": "https://api.github.com/users/lalitpagaria/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lalitpagaria/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lalitpagaria/subscriptions",
"organizations_url": "https://api.github.com/users/lalitpagaria/orgs",
"repos_url": "https://api.github.com/users/lalitpagaria/repos",
"events_url": "https://api.github.com/users/lalitpagaria/events{/privacy}",
"received_events_url": "https://api.github.com/users/lalitpagaria/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@lhoestq Can you please check"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
While loading our own datasets from CSV, `title` and `text` can be `None`.
These `None` values cause issues with the DPR tokenizer, hence this PR handles these cases (see the sketch below):
1) When `text` is `None`, skip that record.
2) When `title` is `None`, use an empty string.
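A minimal sketch of the two rules (hypothetical helper name, not the PR's verbatim code):
```python
def sanitize_row(title, text):
    if text is None:
        return None  # rule 1: drop records without passage text
    return (title if title is not None else "", text)  # rule 2: empty-string title

rows = [("Doc A", "some passage"), (None, "another passage"), ("Doc B", None)]
cleaned = [r for r in (sanitize_row(t, x) for t, x in rows) if r is not None]
# cleaned == [("Doc A", "some passage"), ("", "another passage")]
```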
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@LysandreJik, @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7941/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7941",
"html_url": "https://github.com/huggingface/transformers/pull/7941",
"diff_url": "https://github.com/huggingface/transformers/pull/7941.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7941.patch",
"merged_at": 1603461286000
} |
https://api.github.com/repos/huggingface/transformers/issues/7940 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7940/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7940/comments | https://api.github.com/repos/huggingface/transformers/issues/7940/events | https://github.com/huggingface/transformers/issues/7940 | 726,161,843 | MDU6SXNzdWU3MjYxNjE4NDM= | 7,940 | Access bert output with output_hidden_states=True of TFBertForSequenceClassification fails | {
"login": "datistiquo",
"id": 47474379,
"node_id": "MDQ6VXNlcjQ3NDc0Mzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/47474379?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/datistiquo",
"html_url": "https://github.com/datistiquo",
"followers_url": "https://api.github.com/users/datistiquo/followers",
"following_url": "https://api.github.com/users/datistiquo/following{/other_user}",
"gists_url": "https://api.github.com/users/datistiquo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/datistiquo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/datistiquo/subscriptions",
"organizations_url": "https://api.github.com/users/datistiquo/orgs",
"repos_url": "https://api.github.com/users/datistiquo/repos",
"events_url": "https://api.github.com/users/datistiquo/events{/privacy}",
"received_events_url": "https://api.github.com/users/datistiquo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Could you provide your software versions so that I may investigate?\r\n\r\nI get an error earlier: `print(bert_model.get_layer(\"bert\").output)`:\r\n```\r\nTraceback (most recent call last):\r\n File \"<input>\", line 4, in <module>\r\n File \"/home/lysandre/Workspaces/Python/transformers/.env/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 2105, in output\r\n raise AttributeError('Layer ' + self.name + ' has no inbound nodes.')\r\nAttributeError: Layer bert has no inbound nodes.\r\n```",
"Actually I found weird things, because previously it worked, but after a windows update I think my version of the transformer librry was set back somehow. Because I used version 3.0.2, and there there was no output at all of the hidden states. But now for the latest stable version (3.4) it works. ",
"Glad to hear it!"
] | 1,603 | 1,603 | 1,603 | NONE | null | Hey,
I want to access the output of the main bert model inside the TFBertForSequenceClassification model with output_hidden_states:
```
bert_model = TFBertForSequenceClassification.from_pretrained('bert-base-german-cased', output_hidden_states=True)
```
then
```
print(bert_model.summary())
print(bert_model.get_layer("bert").output)
print(bert_model.layers[0].output[2])  # yields error
```
bert_model.get_layer("bert").output gives just the 2 outputs for last_hidden_state and pooled_output, but the hidden_states are missing.
Why are the hidden_states not available even though I set output_hidden_states=True?
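For reference, a sketch of how the hidden states can be retrieved at call time (assuming a tokenizer for the same checkpoint; the exact tuple layout depends on the flags that are set):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-german-cased')
inputs = tokenizer("Ein kurzer Testsatz", return_tensors="tf")
outputs = bert_model(inputs)   # with output_hidden_states=True the returned tuple also carries the hidden states
hidden_states = outputs[-1]    # tuple with one tensor per layer plus the embedding output
```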
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7940/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7939 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7939/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7939/comments | https://api.github.com/repos/huggingface/transformers/issues/7939/events | https://github.com/huggingface/transformers/pull/7939 | 725,980,243 | MDExOlB1bGxSZXF1ZXN0NTA3MTI2MTA1 | 7,939 | Fix BatchEncoding.word_to_tokens for removed tokens | {
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | MEMBER | null | Fixes https://github.com/huggingface/tokenizers/issues/343
Copied from issue on `tokenizers` repo:
> I'm working with pre-tokenized data (UD-Treebanks) for a sequence-tagging task, since I don't want to inflate the importance of a training example based on the number of word-pieces the token gets split into, I need to map the labels to only the first word-piece of a token.
>
> To achieve this, I was iterating over the words in the original sentence as taken from the treebank and used the word_to_tokens method with the offset of the word in the sentence to get the corresponding token span. If words simply vanish from the sentence, then at first the offsets become invalid and at the final word of the sequence an exception is raised because there's no offset for disappearing words in the sequence.
This notebook demonstrates the issue:
https://colab.research.google.com/drive/139mVXMQ7jZBBoTpgkribgVpOu6W1u8e9?usp=sharing
~~~Py
import transformers
import torch

tokenizer = transformers.AutoTokenizer.from_pretrained("bert-base-multilingual-cased", use_fast=True)

# "\xad" is a soft hyphen; the fast tokenizer's normalizer strips it entirely,
# so this word ends up mapped to zero word-pieces.
batch = [["Test", "\xad", "test"]]

encoded_batch = tokenizer.batch_encode_plus(
    batch,
    padding=True,
    is_pretokenized=True,
    return_tensors='pt',
    truncation=True)

# Boolean mask marking the first word-piece of each original word.
first_pieces = torch.zeros_like(encoded_batch.attention_mask, dtype=torch.bool)

for row, sentence in enumerate(batch):
    for col, token in enumerate(sentence):
        idx = encoded_batch.word_to_tokens(row, col)[0] # this method raises the exception
        first_pieces[row, idx] = True
~~~ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7939/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7939",
"html_url": "https://github.com/huggingface/transformers/pull/7939",
"diff_url": "https://github.com/huggingface/transformers/pull/7939.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7939.patch",
"merged_at": 1603463378000
} |
https://api.github.com/repos/huggingface/transformers/issues/7938 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7938/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7938/comments | https://api.github.com/repos/huggingface/transformers/issues/7938/events | https://github.com/huggingface/transformers/pull/7938 | 725,978,079 | MDExOlB1bGxSZXF1ZXN0NTA3MTI0MzUz | 7,938 | PPL guide code snippet minor fix | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Minor fix to the code snippet in the [perplexity guide](https://huggingface.co/transformers/perplexity.html), as discussed in [this thread](https://discuss.huggingface.co/t/guide-the-best-way-to-calculate-the-perplexity-of-fixed-length-models/193).
Previously the snippet didn't take into account the length of the last loop over the data, which can be shorter than the specified `stride` length. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7938/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7938",
"html_url": "https://github.com/huggingface/transformers/pull/7938",
"diff_url": "https://github.com/huggingface/transformers/pull/7938.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7938.patch",
"merged_at": 1603232260000
} |
https://api.github.com/repos/huggingface/transformers/issues/7937 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7937/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7937/comments | https://api.github.com/repos/huggingface/transformers/issues/7937/events | https://github.com/huggingface/transformers/issues/7937 | 725,912,113 | MDU6SXNzdWU3MjU5MTIxMTM= | 7,937 | Your example code for WNUT NER produces array indexing ValueError | {
"login": "githubrandomuser2017",
"id": 25097908,
"node_id": "MDQ6VXNlcjI1MDk3OTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/25097908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/githubrandomuser2017",
"html_url": "https://github.com/githubrandomuser2017",
"followers_url": "https://api.github.com/users/githubrandomuser2017/followers",
"following_url": "https://api.github.com/users/githubrandomuser2017/following{/other_user}",
"gists_url": "https://api.github.com/users/githubrandomuser2017/gists{/gist_id}",
"starred_url": "https://api.github.com/users/githubrandomuser2017/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/githubrandomuser2017/subscriptions",
"organizations_url": "https://api.github.com/users/githubrandomuser2017/orgs",
"repos_url": "https://api.github.com/users/githubrandomuser2017/repos",
"events_url": "https://api.github.com/users/githubrandomuser2017/events{/privacy}",
"received_events_url": "https://api.github.com/users/githubrandomuser2017/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi,\r\nnot a HuggingFace developer but I came across the same problem. I think this is this is due to the fact that the Tokenizer is truncating sequences longer than 64 so there is a mismatch in length between `tags` and `encodings`. This is also why it's fixed when you increase the max_lenght. Another reason may be that some characters in your sentences are not properly decoded because of wrong charset detection. I hope this helps.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I am also facing this issue. I am using custom dataset and haven't passed any max_length argument to the tokenizer. \r\n\r\nAny idea how to fix this ? But same piece of code works well on W-NUT dataset",
"> Hi,\r\n> not a HuggingFace developer but I came across the same problem. I think this is this is due to the fact that the Tokenizer is truncating sequences longer than 64 so there is a mismatch in length between `tags` and `encodings`. This is also why it's fixed when you increase the max_lenght. Another reason may be that some characters in your sentences are not properly decoded because of wrong charset detection. I hope this helps.\r\n\r\nI observed that in the notebook shared by Hugging face for W-Nut dataset either, the tags and encodings length (for each record) are not same. So hoping that shouldn't be the issue. ",
"@joeddav I am facing the same issue when switching to another dataset, what could be the problem? the behavior continues even with setting `max_length=None`",
"For me the error occurred using the example code in combination with a sentence piece tokenizer (e.g. XLM-RoBERTa). Switching to the updated code used in the run_ner.py script (https://github.com/huggingface/transformers/blob/ad072e852816cd32547504c2eb018995550b126a/examples/token-classification/run_ner.py) solved the issue for me. ",
"I figured out the problem. A typical input instance has `N` tokens and `N` NER tags with a one-to-one correspondence. When you pass in the sentence to the tokenizer, it will add `k` more tokens for either (1) subword tokens (e.g. `##ing`) or (2) special model-specific tokens (e.g. `[CLS]` or `[SEP]`. So now you have `N+k` tokens and `N` NER tags.\r\n\r\nIf you apply a max length truncation (e.g. `64`), then those `N+k` tokens will get truncated to `64`, leaving an unpredictable mix of valid tokens and special tokens because both types of tokens may have been truncated. However, there are still `N` NER tags which may not match up against valid tokens because the latter may have been truncated.\r\n\r\nI fixed the problem by one of several approaches:\r\n\r\n1. Removing data instances that are problematically long. For example, I removed sentences which had more than 45 tokens. Using Pandas really help out here.\r\n2. Increasing the truncation length to, say, 128, or whatever number that's longer than any `N+k`. However, this increase forces me to reduce my batch size due to GPU memory constraints.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"I solved the issue by replacing\r\n```python\r\ndoc_enc_labels[(arr_offset[:,0] == 0) & (arr_offset[:,1] != 0)] = doc_labels\r\nencoded_labels.append(doc_enc_labels.tolist())\r\n```\r\n\r\nwith\r\n\r\n```python\r\nmask = (arr_offset[:, 0] == 0) & (arr_offset[:, 1] != 0)\r\ndoc_enc_labels[mask] = doc_labels[:np.sum(mask)]\r\nencoded_labels.append(doc_enc_labels.tolist())\r\n```\r\n\r\nBy this way, it will only map the first `np.sum(mask)` true indices of `doc_labels` in case of any indexing problem. I am a newbie 🤗 Transformers user, and I wonder if this solution may cause any problems.",
"I have this same issue but \r\n`mask = (arr_offset[:, 0] == 0) & (arr_offset[:, 1] != 0)\r\ndoc_enc_labels[mask] = doc_labels[:np.sum(mask)]\r\nencoded_labels.append(doc_enc_labels.tolist())\r\n`\r\ndid not work after the first encoded_labels run",
"Guys if the example has issues, why even put it out there and have us chaise our tails?",
"Hey! The example is currently being rewritten here by @stevhliu: https://github.com/huggingface/transformers/pull/13923",
"@LysandreJik Thanks for revisiting this problem. I feel that aligning tokens, token labels, and sub-world pieces is too complex for users of the library to implement themselves. Can you (HuggingFace) please provide some utility functions to make this task easier?",
"Hi @githubrandomuser2017, the examples we provide showcase exactly how to do that, for example here: https://github.com/huggingface/transformers/blob/master/examples/pytorch/token-classification/run_ner.py#L370-L404\r\n\r\nDoes this utility function help you out? ",
"PR #13923 was merged with the new version of this example. Closing this issue, feel free to reopen/comment if the issue arises again.",
"@LysandreJik \r\n> Hi @githubrandomuser2017, the examples we provide showcase exactly how to do that, for example here: https://github.com/huggingface/transformers/blob/master/examples/pytorch/token-classification/run_ner.py#L370-L404\r\n> \r\n> Does this utility function help you out?\r\n\r\nI'll let other users chime in."
] | 1,603 | 1,636 | 1,636 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4
- Platform: Google Colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@stefan-it, @sgugger
## Information
Model I am using (Bert, XLNet ...): DistilBERT
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
I'm trying to run the example code Advanced Guides --> Fine-tuning with custom datasets --> [Token Classification with W-NUT Emerging Entities](https://huggingface.co/transformers/custom_datasets.html#token-classification-with-w-nut-emerging-entities).
Steps to reproduce the behavior:
1. I already have a [Google CoLab notebook with your code](https://colab.research.google.com/drive/1i5N7Xc-i91bqXmcp9hamt5q3a_5a-ZnZ?usp=sharing).
2. I use the `tokenizer` with `max_length=64`, which is typically my "best practice" choice. Note that if I set `max_length=None`, everything runs successfully.
```python
max_length = 64
encodings = tokenizer(texts, is_split_into_words=True, max_length=max_length, return_offsets_mapping=True, padding=True, truncation=True)
```
3. When I run `encode_tags()` on the WNUT data, I get a ValueError
```python
labels = encode_tags(tags, encodings)
11 # set labels whose first offset position is 0 and the second is not 0
---> 12 doc_enc_labels[(arr_offset[:,0] == 0) & (arr_offset[:,1] != 0)] = doc_labels
13 encoded_labels.append(doc_enc_labels.tolist())
14
ValueError: NumPy boolean array indexing assignment cannot assign 29 input values to the 24 output values where the mask is true
```
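For context, a quick diagnostic sketch (assuming `tags` and `encodings` from the guide) shows that truncation can leave fewer first-wordpiece slots than word-level tags, which is exactly what the mask assignment trips over:
```python
import numpy as np

for i, (doc_labels, doc_offset) in enumerate(zip(tags, encodings.offset_mapping)):
    arr_offset = np.array(doc_offset)
    # Count positions that correspond to the first wordpiece of a word.
    n_slots = ((arr_offset[:, 0] == 0) & (arr_offset[:, 1] != 0)).sum()
    if n_slots != len(doc_labels):
        print(f"doc {i}: {len(doc_labels)} tags but only {n_slots} first-wordpiece slots")
```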
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I expect that `encode_tags()` should return the correct IOB tag labels when I run your `Tokenizer` with a `max_length=64`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7937/reactions",
"total_count": 15,
"+1": 12,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/7937/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7936 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7936/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7936/comments | https://api.github.com/repos/huggingface/transformers/issues/7936/events | https://github.com/huggingface/transformers/issues/7936 | 725,827,360 | MDU6SXNzdWU3MjU4MjczNjA= | 7,936 | cannot load customized tokenizer with modified vocabulary | {
"login": "CharizardAcademy",
"id": 20318555,
"node_id": "MDQ6VXNlcjIwMzE4NTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/20318555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CharizardAcademy",
"html_url": "https://github.com/CharizardAcademy",
"followers_url": "https://api.github.com/users/CharizardAcademy/followers",
"following_url": "https://api.github.com/users/CharizardAcademy/following{/other_user}",
"gists_url": "https://api.github.com/users/CharizardAcademy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CharizardAcademy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CharizardAcademy/subscriptions",
"organizations_url": "https://api.github.com/users/CharizardAcademy/orgs",
"repos_url": "https://api.github.com/users/CharizardAcademy/repos",
"events_url": "https://api.github.com/users/CharizardAcademy/events{/privacy}",
"received_events_url": "https://api.github.com/users/CharizardAcademy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,609 | 1,609 | NONE | null | I have written a customized tokenizer, saved as tokenization_new.py, and modified the vocab.txt from the S3 server. I tried
`from transformers import NewBertTokenizer`
`tokenizer = NewBertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)`
where I modified `PRETRAINED_VOCAB_FILES_MAP = {"vocab_file":{"bert-base-uncased": /vocab/vocab.txt},}`
where `/vocab/` is a directory next to tokenization_new.py that contains my customized vocabulary. However, the following error was raised:
`Model name 'bert-base-uncased' was not found in tokenizers model name list (bert-base-uncased). We assumed 'bert-base-uncased' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.`
What should I do to use my own customized tokenizer?
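One workaround I am considering (a sketch, assuming the custom vocab sits in a local folder) is to load from a local path instead of re-registering the `bert-base-uncased` name:
```python
from transformers import BertTokenizer

# Point from_pretrained at the folder containing vocab.txt directly;
# this avoids touching PRETRAINED_VOCAB_FILES_MAP at all.
tokenizer = BertTokenizer.from_pretrained("./vocab", do_lower_case=True)
```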
Thanks for the help! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7936/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7935 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7935/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7935/comments | https://api.github.com/repos/huggingface/transformers/issues/7935/events | https://github.com/huggingface/transformers/pull/7935 | 725,800,379 | MDExOlB1bGxSZXF1ZXN0NTA2OTc3NDY2 | 7,935 | TensorBoard/Wandb/optuna/raytune integration improvements. | {
"login": "madlag",
"id": 272253,
"node_id": "MDQ6VXNlcjI3MjI1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/272253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/madlag",
"html_url": "https://github.com/madlag",
"followers_url": "https://api.github.com/users/madlag/followers",
"following_url": "https://api.github.com/users/madlag/following{/other_user}",
"gists_url": "https://api.github.com/users/madlag/gists{/gist_id}",
"starred_url": "https://api.github.com/users/madlag/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/madlag/subscriptions",
"organizations_url": "https://api.github.com/users/madlag/orgs",
"repos_url": "https://api.github.com/users/madlag/repos",
"events_url": "https://api.github.com/users/madlag/events{/privacy}",
"received_events_url": "https://api.github.com/users/madlag/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | Improves TensorBoard logging by grouping train / eval metrics as it is usually done in TensorBoard.
Improves TensorBoard/optuna model hyper-parameter logging.
Improves the optuna and Ray/Tune integration, and provides model hyper-parameter naming.
A test (and sample code) is provided in test_trainer.TrainerHyperParameterIntegrationTest.
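A minimal usage sketch (illustrative; it assumes a `trainer` has already been built and optuna is installed):
```python
def hp_space(trial):
    # Named hyper-parameters, so they show up grouped in TensorBoard.
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 5e-5, log=True),
        "num_train_epochs": trial.suggest_int("num_train_epochs", 1, 3),
    }

best_run = trainer.hyperparameter_search(
    hp_space=hp_space,
    backend="optuna",
    n_trials=10,
    direction="minimize",
)
```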
Some more work may be needed to harmonize metric naming for eval/train: the "eval_" prefix currently used is not very convenient; an "eval/" prefix would be more foolproof and consistent with TensorBoard usage, but it would break a fair amount of code, so it may be done in a separate PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7935/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7935",
"html_url": "https://github.com/huggingface/transformers/pull/7935",
"diff_url": "https://github.com/huggingface/transformers/pull/7935.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7935.patch",
"merged_at": 1603293532000
} |
https://api.github.com/repos/huggingface/transformers/issues/7934 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7934/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7934/comments | https://api.github.com/repos/huggingface/transformers/issues/7934/events | https://github.com/huggingface/transformers/pull/7934 | 725,782,061 | MDExOlB1bGxSZXF1ZXN0NTA2OTYyMzcx | 7,934 | [s2s] create doc for pegasus/fsmt replication | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | This PR:
* creates a dedicated doc for getting eval data
* moves the existing entries to the new doc
* adds FSMT
* adds pegasus
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7934/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7934",
"html_url": "https://github.com/huggingface/transformers/pull/7934",
"diff_url": "https://github.com/huggingface/transformers/pull/7934.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7934.patch",
"merged_at": 1603220872000
} |
https://api.github.com/repos/huggingface/transformers/issues/7933 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7933/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7933/comments | https://api.github.com/repos/huggingface/transformers/issues/7933/events | https://github.com/huggingface/transformers/pull/7933 | 725,779,991 | MDExOlB1bGxSZXF1ZXN0NTA2OTYwNzAw | 7,933 | Fix comet_ml import and add ensure availability | {
"login": "dsblank",
"id": 168568,
"node_id": "MDQ6VXNlcjE2ODU2OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/168568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dsblank",
"html_url": "https://github.com/dsblank",
"followers_url": "https://api.github.com/users/dsblank/followers",
"following_url": "https://api.github.com/users/dsblank/following{/other_user}",
"gists_url": "https://api.github.com/users/dsblank/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dsblank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dsblank/subscriptions",
"organizations_url": "https://api.github.com/users/dsblank/orgs",
"repos_url": "https://api.github.com/users/dsblank/repos",
"events_url": "https://api.github.com/users/dsblank/events{/privacy}",
"received_events_url": "https://api.github.com/users/dsblank/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"FYI: @stas00 ",
"> but I agree with @stas00: I wouldn't want to see warnings about cometml or wandb if I don't have the libraries installed.\r\n\r\nThis is not the case already. We were talking about the odd case where some other package installed one of these as its auto-dependencies. So the user now unwittingly needs to figure out why in the world she needs to get an API key for something she didn't ask for in first place.\r\n\r\nUnfortunately I am forced to reset my conda env a lot recently, so I lost the one where this exact scenario has happened, so at the moment I can't point the guilty finger at which package installed `cometml` without me doing so intentionally/directly. If it happens again I will report back.\r\n\r\nOtherwise all is good.",
"Resolved merge conflicts. Should be ready to go.",
"Thanks!"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
1. Adds a better check to make sure comet_ml is ready to use.
2. Moves the integration imports above the ML imports. This is required to use comet_ml.
The current version 3.4.0 is broken and cannot be used with comet_ml without a workaround. This PR fixes that.
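A minimal sketch of the ordering constraint (comet_ml has to be imported before the ML frameworks so it can hook into them; moving the integration imports achieves this inside the library):
```python
import comet_ml  # must come before torch/tensorflow so auto-logging can attach

import torch
from transformers import Trainer
```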
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Trainer: @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7933/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7933",
"html_url": "https://github.com/huggingface/transformers/pull/7933",
"diff_url": "https://github.com/huggingface/transformers/pull/7933.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7933.patch",
"merged_at": 1603798268000
} |
https://api.github.com/repos/huggingface/transformers/issues/7932 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7932/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7932/comments | https://api.github.com/repos/huggingface/transformers/issues/7932/events | https://github.com/huggingface/transformers/issues/7932 | 725,775,417 | MDU6SXNzdWU3MjU3NzU0MTc= | 7,932 | Addition of MMI-antiLM decoding | {
"login": "AADeLucia",
"id": 13154289,
"node_id": "MDQ6VXNlcjEzMTU0Mjg5",
"avatar_url": "https://avatars.githubusercontent.com/u/13154289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AADeLucia",
"html_url": "https://github.com/AADeLucia",
"followers_url": "https://api.github.com/users/AADeLucia/followers",
"following_url": "https://api.github.com/users/AADeLucia/following{/other_user}",
"gists_url": "https://api.github.com/users/AADeLucia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AADeLucia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AADeLucia/subscriptions",
"organizations_url": "https://api.github.com/users/AADeLucia/orgs",
"repos_url": "https://api.github.com/users/AADeLucia/repos",
"events_url": "https://api.github.com/users/AADeLucia/events{/privacy}",
"received_events_url": "https://api.github.com/users/AADeLucia/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,610 | 1,610 | NONE | null | # 🚀 Feature request
Hugging Face does a great job of including popular decoding strategies such as nucleus sampling, top-k, and temperature. There are also other really interesting decoding strategies for chatbots to fix the response "blandness" or "I don't know" problem, such as using the **Maximum Mutual Information anti-Language Model objective (MMI anti-LM)**. The algorithm is defined in [A Diversity-Promoting Objective Function for Neural Conversation Models](https://www.aclweb.org/anthology/N16-1014.pdf).
## Motivation
I'm requesting this as a feature because I used this in a narrative generation paper. From this point forward I will be using "we" to reference my co-authors (@XiangLi1999 also contributed to the code) and I. The results of our work show that antiLM decoding does in fact help make the generated output more interesting without hurting fluency. Our work is [Decoding Methods for Neural Narrative Generation](https://arxiv.org/abs/2010.07375) and the rest of our code is in our [paper repo](https://github.com/AADeLucia/gpt2-narrative-decoding).
We think others would also be interested in using this decoding method for their work.
## Your contribution
We have a working implementation in a hacked version of the `generation_utils.py` file. It's not pretty (sorry) but maybe a good starting point? The code is in PR #7931.
Also the author's implementation (not huggingface-based) is in [Jiwei Li's repo](https://github.com/jiweil/Neural-Dialogue-Generation). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7932/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/7932/timeline | completed | null | null |
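For readers of the feature request above, here is a rough sketch of greedy MMI-antiLM decoding as described in Li et al. (2016): the score of each candidate token is `log p(t | context) - lambda * log p(t)`, with the anti-LM penalty applied only to the first few response tokens. The standalone `generate_anti_lm(model, lm_model, ...)` shape follows the suggestion made later in the review of PR #7931; all names are illustrative and this is not part of the library's `generate()`:

```python
import torch


@torch.no_grad()
def generate_anti_lm(model, lm_model, input_ids, bos_token_id,
                     max_new_tokens=40, anti_lm_weight=0.5, anti_lm_steps=5):
    """Greedy MMI-antiLM decoding with two causal LMs (illustrative sketch)."""
    generated = input_ids  # dialogue context + response generated so far
    # The anti-LM term is conditioned only on the response prefix, seeded with BOS.
    response = input_ids.new_full((input_ids.size(0), 1), bos_token_id)
    for step in range(max_new_tokens):
        cond_logits = model(generated, return_dict=True).logits[:, -1, :]
        scores = torch.log_softmax(cond_logits, dim=-1)
        if step < anti_lm_steps:
            # Penalize generically probable ("bland") tokens, but only early
            # in the response, as the paper recommends.
            lm_logits = lm_model(response, return_dict=True).logits[:, -1, :]
            scores = scores - anti_lm_weight * torch.log_softmax(lm_logits, dim=-1)
        next_token = scores.argmax(dim=-1, keepdim=True)
        generated = torch.cat([generated, next_token], dim=-1)
        response = torch.cat([response, next_token], dim=-1)
    return generated
```

`model` and `lm_model` can be any two HF causal LMs (e.g. the conversational model and a plain language model); see the paper repo linked above for the authors' full implementation.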
https://api.github.com/repos/huggingface/transformers/issues/7931 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7931/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7931/comments | https://api.github.com/repos/huggingface/transformers/issues/7931/events | https://github.com/huggingface/transformers/pull/7931 | 725,774,350 | MDExOlB1bGxSZXF1ZXN0NTA2OTU2MTI5 | 7,931 | MMI-antiLM decoding | {
"login": "AADeLucia",
"id": 13154289,
"node_id": "MDQ6VXNlcjEzMTU0Mjg5",
"avatar_url": "https://avatars.githubusercontent.com/u/13154289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AADeLucia",
"html_url": "https://github.com/AADeLucia",
"followers_url": "https://api.github.com/users/AADeLucia/followers",
"following_url": "https://api.github.com/users/AADeLucia/following{/other_user}",
"gists_url": "https://api.github.com/users/AADeLucia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AADeLucia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AADeLucia/subscriptions",
"organizations_url": "https://api.github.com/users/AADeLucia/orgs",
"repos_url": "https://api.github.com/users/AADeLucia/repos",
"events_url": "https://api.github.com/users/AADeLucia/events{/privacy}",
"received_events_url": "https://api.github.com/users/AADeLucia/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This technique looks really cool! Unfortunately running `generate` with two models will break lots of our assumptions, so maybe you could write a standalone:\r\n\r\n```python\r\ndef generate_anti_lm(model, lm_model, **kwargs):\r\n\t... logic ...\r\n return generated_ids\r\n```\r\n\r\n,put it in `examples/anti_mlm_generation/`, and add a test that it runs with `sshleifer/tiny-gpt2`, for example?\r\n\r\nDoes that make sense to you @patrickvonplaten ?",
"Hey @AADeLucia - thanks for your PR! The `generate()` function is a very central part of the library and thus we have to be super careful when implementing new features. I agree with @sshleifer that MMI-antiLM decoding probably fits better in an example (has its own `generate()` function in an example file) to begin with - would that be ok for you?",
"> Hey @AADeLucia - thanks for your PR! The `generate()` function is a very central part of the library and thus we have to be super careful when implementing new features. I agree with @sshleifer that MMI-antiLM decoding probably fits better in an example (has its own `generate()` function in an example file) to begin with - would that be ok for you?\r\n\r\nThank you so much for your quick responses! Yes, putting it as its own example is okay with me.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,603 | 1,614 | 1,614 | NONE | null | # What does this PR do?
Implements **Maximum Mutual Information anti-Language Model objective (MMI anti-LM)** decoding from [A Diversity-Promoting Objective Function for Neural Conversation Models](https://www.aclweb.org/anthology/N16-1014.pdf).
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7931/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7931",
"html_url": "https://github.com/huggingface/transformers/pull/7931",
"diff_url": "https://github.com/huggingface/transformers/pull/7931.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7931.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7930 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7930/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7930/comments | https://api.github.com/repos/huggingface/transformers/issues/7930/events | https://github.com/huggingface/transformers/pull/7930 | 725,748,820 | MDExOlB1bGxSZXF1ZXN0NTA2OTM1MzUx | 7,930 | update model cards of Illuin models | {
"login": "quentinheinrich",
"id": 58738145,
"node_id": "MDQ6VXNlcjU4NzM4MTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/58738145?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quentinheinrich",
"html_url": "https://github.com/quentinheinrich",
"followers_url": "https://api.github.com/users/quentinheinrich/followers",
"following_url": "https://api.github.com/users/quentinheinrich/following{/other_user}",
"gists_url": "https://api.github.com/users/quentinheinrich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/quentinheinrich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quentinheinrich/subscriptions",
"organizations_url": "https://api.github.com/users/quentinheinrich/orgs",
"repos_url": "https://api.github.com/users/quentinheinrich/repos",
"events_url": "https://api.github.com/users/quentinheinrich/events{/privacy}",
"received_events_url": "https://api.github.com/users/quentinheinrich/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Updates the model cards of Illuin's uploaded models to provide additional information.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7930/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7930",
"html_url": "https://github.com/huggingface/transformers/pull/7930",
"diff_url": "https://github.com/huggingface/transformers/pull/7930.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7930.patch",
"merged_at": 1603281954000
} |
https://api.github.com/repos/huggingface/transformers/issues/7929 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7929/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7929/comments | https://api.github.com/repos/huggingface/transformers/issues/7929/events | https://github.com/huggingface/transformers/issues/7929 | 725,694,307 | MDU6SXNzdWU3MjU2OTQzMDc= | 7,929 | Reformer model does not work with padded sequences | {
"login": "FabianBell",
"id": 33394443,
"node_id": "MDQ6VXNlcjMzMzk0NDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/33394443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FabianBell",
"html_url": "https://github.com/FabianBell",
"followers_url": "https://api.github.com/users/FabianBell/followers",
"following_url": "https://api.github.com/users/FabianBell/following{/other_user}",
"gists_url": "https://api.github.com/users/FabianBell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FabianBell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FabianBell/subscriptions",
"organizations_url": "https://api.github.com/users/FabianBell/orgs",
"repos_url": "https://api.github.com/users/FabianBell/repos",
"events_url": "https://api.github.com/users/FabianBell/events{/privacy}",
"received_events_url": "https://api.github.com/users/FabianBell/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"The only Reformer tokenizer we have actually doesn't have a PAD Token which is why this leads to problems. The PR attached below removes the PAD token. Before padding one should set \r\n\r\n```python\r\ntokenizer.pad_token = tokenizer.eos_token\r\n```\r\n\r\nSimilar to GPT2 this won't cause any problems thanks to causal masking, see: https://github.com/huggingface/transformers/issues/4122#issuecomment-713749343"
] | 1,603 | 1,603 | 1,603 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: Linux
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6.0 (No)
- Tensorflow version (GPU?): 2.3.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Reformer
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name) CommonGen
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import ReformerTokenizer, ReformerModel
tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
seq = tokenizer(['Hello this is a test.', 'This is a test as well'], padding=True, return_tensors='pt')
reformer = ReformerModel.from_pretrained('google/reformer-crime-and-punishment')
out = reformer(**seq)
```
```python
Traceback (most recent call last):
File "reformerbug.py", line 20, in <module>
out = reformer(**seq)
File "/home/fabian/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/fabian/.local/lib/python3.8/site-packages/transformers/modeling_reformer.py", line 2096, in forward
embedding_output = self.embeddings(
File "/home/fabian/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/fabian/.local/lib/python3.8/site-packages/transformers/modeling_reformer.py", line 252, in forward
inputs_embeds = self.word_embeddings(input_ids)
File "/home/fabian/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/fabian/.local/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 124, in forward
return F.embedding(
File "/home/fabian/.local/lib/python3.8/site-packages/torch/nn/functional.py", line 1814, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The model should properly calculate the forward pass given the encoded sequence.
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7929/timeline | completed | null | null |
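Applying the workaround from the maintainer's comment above (the shipped tokenizer has no PAD token, so reuse the EOS token, which is harmless under causal masking, as for GPT-2), the reproduction script runs without the `IndexError`:

```python
from transformers import ReformerModel, ReformerTokenizer

tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
# The shipped tokenizer has no PAD token; reuse EOS before padding.
tokenizer.pad_token = tokenizer.eos_token

model = ReformerModel.from_pretrained("google/reformer-crime-and-punishment")
batch = tokenizer(
    ["Hello this is a test.", "This is a test as well"],
    padding=True,
    return_tensors="pt",
)
out = model(**batch)  # runs without the IndexError from the report
```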
https://api.github.com/repos/huggingface/transformers/issues/7928 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7928/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7928/comments | https://api.github.com/repos/huggingface/transformers/issues/7928/events | https://github.com/huggingface/transformers/pull/7928 | 725,675,948 | MDExOlB1bGxSZXF1ZXN0NTA2ODc0MDU0 | 7,928 | Respect the 119 line chars | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | MEMBER | null | Respect the 119 line chars limit in the model summary | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7928/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7928",
"html_url": "https://github.com/huggingface/transformers/pull/7928",
"diff_url": "https://github.com/huggingface/transformers/pull/7928.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7928.patch",
"merged_at": 1603206168000
} |
https://api.github.com/repos/huggingface/transformers/issues/7927 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7927/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7927/comments | https://api.github.com/repos/huggingface/transformers/issues/7927/events | https://github.com/huggingface/transformers/pull/7927 | 725,626,141 | MDExOlB1bGxSZXF1ZXN0NTA2ODMyNjE0 | 7,927 | [ProphetNet] add model summary | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7927/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7927",
"html_url": "https://github.com/huggingface/transformers/pull/7927",
"diff_url": "https://github.com/huggingface/transformers/pull/7927.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7927.patch",
"merged_at": 1603203063000
} |
https://api.github.com/repos/huggingface/transformers/issues/7926 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7926/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7926/comments | https://api.github.com/repos/huggingface/transformers/issues/7926/events | https://github.com/huggingface/transformers/issues/7926 | 725,558,658 | MDU6SXNzdWU3MjU1NTg2NTg= | 7,926 | Validation loop gives OOM when finetuning T5 | {
"login": "laibamehnaz",
"id": 36405283,
"node_id": "MDQ6VXNlcjM2NDA1Mjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/36405283?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laibamehnaz",
"html_url": "https://github.com/laibamehnaz",
"followers_url": "https://api.github.com/users/laibamehnaz/followers",
"following_url": "https://api.github.com/users/laibamehnaz/following{/other_user}",
"gists_url": "https://api.github.com/users/laibamehnaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laibamehnaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laibamehnaz/subscriptions",
"organizations_url": "https://api.github.com/users/laibamehnaz/orgs",
"repos_url": "https://api.github.com/users/laibamehnaz/repos",
"events_url": "https://api.github.com/users/laibamehnaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/laibamehnaz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Maybe @sshleifer or @patil-suraj can help here? :-) ",
"We can't help without:\r\n\r\n1) a command that used to work and now OOMs.\r\n2) Some notion of when it worked. (Ideally version numbers but a guess is fine.)\r\n3) current `transformers-cli env` and `pip freeze | grep torch` outputs.",
"Maybe colab allocated you something larger than 12GB K80 the last time you ran your command?",
"> We can't help without:\r\n> \r\n> 1. a command that used to work and now OOMs.\r\n> 2. Some notion of when it worked. (Ideally version numbers but a guess is fine.)\r\n> 3. current `transformers-cli env` and `pip freeze | grep torch` outputs.\r\n\r\nYeah, I understand what you're saying. Itrained T5 like a month back on colab. Also I wanted to know, if the OOM error comes sporadically, i.e., sometimes in the first epoch, sometimes in the second epoch, and it's always during the validation loop, what should I conclude from it? It is an error in my code, or it is just the lack of appropriate memory.",
"Unfortunately, it's not clear what to conclude. \r\nYou can force eval to use less memory by controlling `val_max_target_length`, `eval_max_gen_length`, `val_batch_size`, and `eval_num_beams`.\r\n",
"Alright. Thank you so much.",
"Alternatively, you can try to use `Seq2SeqTrainer`.\r\n`Trainer` has recently added `eval_accumulation_step` argument which offloads the `logits/predictions` to `cpu` every `eval_accumulation_steps` to avoid OOM, you can use this with `Seq2SeqTrainer` as well.",
"> Alternatively, you can try to use `Seq2SeqTrainer`.\r\n> `Trainer` has recently added `eval_accumulation_step` argument which offloads the `logits/predictions` to `cpu` every `eval_accumulation_steps` to avoid OOM, you can use this with `Seq2SeqTrainer` as well.\r\n\r\nThank you so much. Tried this and it worked well. :)",
"> > Alternatively, you can try to use `Seq2SeqTrainer`.\r\n> > `Trainer` has recently added `eval_accumulation_step` argument which offloads the `logits/predictions` to `cpu` every `eval_accumulation_steps` to avoid OOM, you can use this with `Seq2SeqTrainer` as well.\r\n> \r\n> Thank you so much. Tried this and it worked well. :)\r\n\r\nwould you share what `eval_accumulation_steps` you used? Thanks.",
"I am having the exact same issue, this is happening only during evaluation and not training. thanks",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,603 | 1,614 | 1,614 | NONE | null | While finetuning T5-base on a summarization task with `--sortish_sampler`, I get an OOM error starting from a particular index during the validation loop. After removing those indices and training again, I still get the OOM error, but in the second validation loop, whereas the validation loop worked fine the first time in the 0th epoch. I have finetuned t5-base on the same dataset in the same environment before, and it never gave this error.
I am using a batch size of 1 for both training and evaluation.
I am using Colab to finetune the model on a single GPU.
GPU specs:
GPU: Tesla K80
RAM: 12GB
This is the traceback:
Epoch 0: 91% 5644/6231 [25:04<02:36, 3.75it/s, loss=2.089, v_num=16]
Epoch 0: 91% 5645/6231 [25:06<02:36, 3.75it/s, loss=2.089, v_num=16]
Epoch 0: 91% 5646/6231 [25:07<02:36, 3.75it/s, loss=2.089, v_num=16]
Epoch 0: 91% 5647/6231 [25:08<02:35, 3.74it/s, loss=2.089, v_num=16]
Epoch 1: 84% 5247/6231 [18:08<03:24, 4.82it/s, loss=1.859, v_num=16]
Validating: 0it [00:00, ?it/s]
Epoch 1: 84% 5248/6231 [18:10<03:24, 4.81it/s, loss=1.859, v_num=16]
Epoch 1: 84% 5249/6231 [18:11<03:24, 4.81it/s, loss=1.859, v_num=16]Traceback (most recent call last):
File "finetune.py", line 441, in <module>
main(args)
File "finetune.py", line 416, in main
logger=logger,
File "/content/drive/My Drive/Colab Notebooks/transformers_20_Oct_summarization/transformers/examples/lightning_base.py", line 386, in generic_train
trainer.fit(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 440, in fit
results = self.accelerator_backend.train()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 54, in train
results = self.train_or_test()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/accelerator.py", line 66, in train_or_test
results = self.trainer.train()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 483, in train
self.train_loop.run_training_epoch()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py", line 569, in run_training_epoch
self.trainer.run_evaluation(test_mode=False)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 568, in run_evaluation
output = self.evaluation_loop.evaluation_step(test_mode, batch, batch_idx, dataloader_idx)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 171, in evaluation_step
output = self.trainer.accelerator_backend.validation_step(args)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 76, in validation_step
output = self.__validation_step(args)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 86, in __validation_step
output = self.trainer.model.validation_step(*args)
File "finetune.py", line 181, in validation_step
return self._generative_step(batch)
File "finetune.py", line 221, in _generative_step
max_length=self.eval_max_length,
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/generation_utils.py", line 489, in generate
model_kwargs=model_kwargs,
File "/usr/local/lib/python3.6/dist-packages/transformers/generation_utils.py", line 665, in _generate_beam_search
outputs = self(**model_inputs, return_dict=True) # (batch_size * num_beams, cur_len, vocab_size)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_t5.py", line 1212, in forward
return_dict=return_dict,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_t5.py", line 767, in forward
output_attentions=output_attentions,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_t5.py", line 556, in forward
output_attentions=output_attentions,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_t5.py", line 478, in forward
output_attentions=output_attentions,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_t5.py", line 374, in forward
q, k.transpose(3, 2)
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 14.73 GiB total capacity; 13.70 GiB already allocated; 13.88 MiB free; 13.72 GiB reserved in total by PyTorch)
Epoch 1: 84%|████████▍ | 5249/6231 [18:13<03:24, 4.80it/s, loss=1.859, v_num=16]
Any help is appreciated. :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7926/timeline | completed | null | null |
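A minimal sketch of the fix that worked according to the comments above: pass `eval_accumulation_steps` so that logits/predictions are offloaded to CPU during evaluation (the same argument also works with the `Seq2SeqTrainer` from `examples/seq2seq`). The datasets are placeholders for the poster's tokenized summarization data:

```python
from transformers import T5ForConditionalGeneration, Trainer, TrainingArguments

model = T5ForConditionalGeneration.from_pretrained("t5-base")

training_args = TrainingArguments(
    output_dir="./t5-summarization",
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    eval_accumulation_steps=4,  # offload logits/predictions to CPU every 4 eval steps
)

# train_dataset / val_dataset are assumed to be tokenized summarization
# datasets prepared as in the poster's fine-tuning setup (placeholders).
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # placeholder
    eval_dataset=val_dataset,     # placeholder
)
trainer.evaluate()
```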
https://api.github.com/repos/huggingface/transformers/issues/7925 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7925/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7925/comments | https://api.github.com/repos/huggingface/transformers/issues/7925/events | https://github.com/huggingface/transformers/pull/7925 | 725,438,040 | MDExOlB1bGxSZXF1ZXN0NTA2NjcwODIw | 7,925 | # Add whole word mask support for lm fine-tune | {
"login": "wlhgtc",
"id": 16603773,
"node_id": "MDQ6VXNlcjE2NjAzNzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/16603773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wlhgtc",
"html_url": "https://github.com/wlhgtc",
"followers_url": "https://api.github.com/users/wlhgtc/followers",
"following_url": "https://api.github.com/users/wlhgtc/following{/other_user}",
"gists_url": "https://api.github.com/users/wlhgtc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wlhgtc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wlhgtc/subscriptions",
"organizations_url": "https://api.github.com/users/wlhgtc/orgs",
"repos_url": "https://api.github.com/users/wlhgtc/repos",
"events_url": "https://api.github.com/users/wlhgtc/events{/privacy}",
"received_events_url": "https://api.github.com/users/wlhgtc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Seem all test passed( expect the format problem), @sgugger @stas00 Could you help me review these PR ? ",
"And I wonder which version of black you use in **check_code_quality**.\r\nI got errors as follows:\r\n```\r\nwould reformat /home/circleci/transformers/examples/language-modeling/run_language_modeling.py\r\nwould reformat /home/circleci/transformers/src/transformers/data/datasets/language_modeling.py\r\n\r\n```\r\nI reformat my code with black(19.10b0), and all files are left unchanged",
"> Thanks a lot for your PR!\r\n> \r\n> Before digging more in a detailed review, I have a general comment: I think this should be decoupled a bit more: you created a new class `LineByLineWithRefDataset`, and in the same vein, I think you should create a new `DataCollator` for the whole-world masking. This will make it clearer to read and easier to customize.\r\n> \r\n> It would also be super nice if you could document in the README how to use your example with a chinese reference file (do you pass the script you added? or use the script you added to generate a file?)\r\n\r\nFinish ~ @sgugger ",
"> And I wonder which version of black you use in **check_code_quality**.\r\n```\r\n$ grep black setup.py\r\nextras[\"quality\"] = [\"black >= 20.8b1\", \"isort >= 5.5.4\", \"flake8 >= 3.8.3\"]\r\n```",
"> > And I wonder which version of black you use in **check_code_quality**.\r\n> \r\n> ```\r\n> $ grep black setup.py\r\n> extras[\"quality\"] = [\"black >= 20.8b1\", \"isort >= 5.5.4\", \"flake8 >= 3.8.3\"]\r\n> ```\r\n\r\nthx!",
"Looking good to me except for the code quality. If you don't manage to fix it, I can force-push on your branch.",
"> Looking good to me except for the code quality. If you don't manage to fix it, I can force-push on your branch.\r\n\r\nOK, I tried to fix it but failed :(",
"Just made the necessary change. Note that this wasn't styling that caused the isse but the code quality in general. `make quality` was erroring and telling you to run `make fix-copies` (which I did).",
"> This mostly looks good to me except I don't fully understand why we need the reference file. What's `LTP`? Why do we need reference files? Can this be explained in the README?\r\n\r\nThanks for your question.\r\n**Q :** Why ref file ?\r\n**A :** Suppose we have a Chinese sentence like : `我喜欢你。` The original Chinese-BERT will tokenize it as `['我','喜','欢','你']` in char level.\r\nActually, `喜欢` is a whole word. For whole word mask proxy, We need res like `['我','喜','##欢','你']`.\r\nSo we need a ref file to tell model which pos of BERT original token should be added `##`.\r\n\r\n**Q :** Why LTP ?\r\n**A :** Cause the best known Chinese WWM BERT is [https://github.com/ymcui/Chinese-BERT-wwm](https://github.com/ymcui/Chinese-BERT-wwm). It works well on so many Chines Task like CLUE (Chinese GLUE).\r\nThey use LTP, so if we want to fine-tune their model, we need LTP.\r\n\r\n@LysandreJik hope this would help.\r\n",
"@wlhgtc ltp is not added to the requirements.txt under examples folder ",
"> @wlhgtc ltp is not added to the requirements.txt under examples folder\r\n\r\nThanks for your notice. I forgot add it to requirements.txt.\r\nBut this is an optional package only for Chinese LM Fine-tune(and could be replaced by others tokenizer), I haven't find a way to note that :(",
"> > @wlhgtc ltp is not added to the requirements.txt under examples folder\r\n> \r\n> Thanks for your notice. I forgot add it to requirements.txt.\r\n> But this is an optional package only for Chinese LM Fine-tune(and could be replaced by others tokenizer), I haven't find a way to note that :(\r\n\r\nThanks, I also just tried, ltp requires transformer==3.2. I have no idea why. so have to install ltp with on dependency. Very annoying. By the way, thanks for the excellent work. \r\n\r\none more bug looks like when doing eval, it is referring to the ref file for the training data. if I set train_data = test_data. It goes through fine. Did I do something wrong? I am trying to follow your process as close as I can \r\n```\r\nTraceback (most recent call last):\r\n File \"../run_language_modeling.py\", line 351, in <module>\r\n main()\r\n File \"../run_language_modeling.py\", line 279, in main\r\n if training_args.do_eval\r\n File \"../run_language_modeling.py\", line 174, in get_dataset\r\n return _dataset(args.eval_data_file)\r\n File \"../run_language_modeling.py\", line 160, in _dataset\r\n ref_path=args.chinese_ref_file,\r\n File \"/home/chengyu/anaconda3/envs/pytorch_transformer/lib/python3.7/site-packages/transformers/data/datasets/language_modeling.py\", line 139, in __init__\r\n assert len(data) == len(ref)\r\n```\r\n\r\n ",
"1. Yeah, the LTP version doesn't support the newest transformers. I do the same things as yours.\r\n2. For the error, it means that your dataset has different length with you ref file(cause we read it line by line, this would lead to mismatch). Seem I didn't add the param `eval_ref_file` to data_args, then it will read `train_ref_file`; then cause this error.\r\nI will fix it soon.\r\n"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | This PR adds support for a **wwm** (whole word mask) proxy when fine-tuning BERT-like models.
It can be divided into two parts: English model support and Chinese model support.
For English, it's simple. The original tokenizer output contains symbols like '##ing'.
I just use the same mask proxy as [Google's](https://github.com/google-research/bert/blob/master/create_pretraining_data.py#L342) in [data_collator.py](https://github.com/wlhgtc/transformers/blob/master/src/transformers/data/data_collator.py#L168).
For Chinese, it's harder. We need to rely on a (word-level) tokenizer, because BERT is char-level in Chinese.
So I do the following to get word-level tokens:
1. add ref-info extraction code in [chinese_ref.py](https://github.com/wlhgtc/transformers/blob/master/examples/language-modeling/chinese_ref.py#L79)
2. create a new dataset that keeps the ref info in [language_modeling.py](https://github.com/wlhgtc/transformers/blob/master/src/transformers/data/datasets/language_modeling.py#L117)
3. create the word-level mask references from the ref info in [data_collator.py](https://github.com/wlhgtc/transformers/blob/master/src/transformers/data/data_collator.py#L150)
From then on, it's the same as for English.
And I add two parameters (`wwm` and `chinese_ref_path`) to the LM fine-tuning script.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7925/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7925",
"html_url": "https://github.com/huggingface/transformers/pull/7925",
"diff_url": "https://github.com/huggingface/transformers/pull/7925.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7925.patch",
"merged_at": 1603372740000
} |
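To make the English whole-word-mask proxy from the PR above concrete, here is a condensed, illustrative sketch of how subword pieces prefixed with `##` are grouped so that a whole word is always masked together. This is a simplification for exposition; see the PR's `data_collator.py` for the real implementation:

```python
import random


def whole_word_mask_indices(tokens, mask_prob=0.15):
    """Pick token positions to mask so that '##' subword pieces are masked
    together with the word they belong to."""
    spans = []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and spans:
            spans[-1].append(i)  # continuation piece joins the previous word
        else:
            spans.append([i])    # start of a new whole word
    random.shuffle(spans)
    budget = max(1, round(len(tokens) * mask_prob))
    masked = []
    for span in spans:
        if len(masked) >= budget:
            break
        masked.extend(span)
    return sorted(masked)


# e.g. masks ['super', '##cali', '##fragilistic'] as one unit, never just '##cali'
print(whole_word_mask_indices(["super", "##cali", "##fragilistic", "is", "fun"]))
```

For Chinese, the same grouping is driven by the LTP-generated reference file described above, since the BERT tokenizer alone cannot recover word boundaries from characters.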
https://api.github.com/repos/huggingface/transformers/issues/7924 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7924/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7924/comments | https://api.github.com/repos/huggingface/transformers/issues/7924/events | https://github.com/huggingface/transformers/issues/7924 | 725,266,119 | MDU6SXNzdWU3MjUyNjYxMTk= | 7,924 | EncoderDecoderModel not working with DDP | {
"login": "ayubSubhaniya",
"id": 20911334,
"node_id": "MDQ6VXNlcjIwOTExMzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/20911334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayubSubhaniya",
"html_url": "https://github.com/ayubSubhaniya",
"followers_url": "https://api.github.com/users/ayubSubhaniya/followers",
"following_url": "https://api.github.com/users/ayubSubhaniya/following{/other_user}",
"gists_url": "https://api.github.com/users/ayubSubhaniya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayubSubhaniya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayubSubhaniya/subscriptions",
"organizations_url": "https://api.github.com/users/ayubSubhaniya/orgs",
"repos_url": "https://api.github.com/users/ayubSubhaniya/repos",
"events_url": "https://api.github.com/users/ayubSubhaniya/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayubSubhaniya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"@sgugger @patrickvonplaten Can someone please help me?",
"I tried running @patrickvonplaten `bert-bert` Encoder-Decoder summarization script using DDP but got same error. \r\nBelow is the script. Have modified script a bit to skip some download for fast experimentation.\r\n\r\n``` patrick_script.py\r\n#!/usr/bin/env python3\r\nimport os\r\nimport nlp\r\nimport logging\r\nfrom transformers import BertTokenizer, EncoderDecoderModel, Trainer, TrainingArguments\r\n\r\nlogging.basicConfig(level=logging.INFO)\r\n\r\nlocal_rank = int(os.environ.get('LOCAL_RANK', -1))\r\nprint(\"local rank\", local_rank)\r\n\r\nmodel = EncoderDecoderModel.from_encoder_decoder_pretrained(\"bert-base-uncased\", \"bert-base-uncased\")\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\n\r\n# CLS token will work as BOS token\r\ntokenizer.bos_token = tokenizer.cls_token\r\n\r\n# SEP token will work as EOS token\r\ntokenizer.eos_token = tokenizer.sep_token\r\n\r\n# load train and validation data\r\n# train_dataset = nlp.load_dataset(\"cnn_dailymail\", \"3.0.0\", split=\"train\")\r\ntrain_dataset = None\r\nval_dataset = nlp.load_dataset(\"cnn_dailymail\", \"3.0.0\", split=\"validation[:1%]\")\r\n\r\n# # load rouge for validation\r\n# rouge = nlp.load_metric(\"rouge\")\r\n\r\n\r\n# set decoding params\r\nmodel.config.decoder_start_token_id = tokenizer.bos_token_id\r\nmodel.config.eos_token_id = tokenizer.eos_token_id\r\nmodel.config.max_length = 142\r\nmodel.config.min_length = 56\r\nmodel.config.no_repeat_ngram_size = 3\r\nmodel.early_stopping = True\r\nmodel.length_penalty = 2.0\r\nmodel.num_beams = 4\r\n\r\n\r\n# map data correctly\r\ndef map_to_encoder_decoder_inputs(batch):\r\n # Tokenizer will automatically set [BOS] <text> [EOS]\r\n # cut off at BERT max length 512\r\n inputs = tokenizer(batch[\"article\"], padding=\"max_length\", truncation=True, max_length=512)\r\n # force summarization <= 128\r\n outputs = tokenizer(batch[\"highlights\"], padding=\"max_length\", truncation=True, max_length=128)\r\n\r\n batch[\"input_ids\"] = inputs.input_ids\r\n batch[\"attention_mask\"] = inputs.attention_mask\r\n\r\n batch[\"decoder_input_ids\"] = outputs.input_ids\r\n batch[\"labels\"] = outputs.input_ids.copy()\r\n # mask loss for padding\r\n batch[\"labels\"] = [\r\n [-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch[\"labels\"]\r\n ]\r\n batch[\"decoder_attention_mask\"] = outputs.attention_mask\r\n\r\n assert all([len(x) == 512 for x in inputs.input_ids])\r\n assert all([len(x) == 128 for x in outputs.input_ids])\r\n\r\n return batch\r\n\r\n\r\n# def compute_metrics(pred):\r\n# labels_ids = pred.label_ids\r\n# pred_ids = pred.predictions\r\n#\r\n# # all unnecessary tokens are removed\r\n# pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)\r\n# label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)\r\n#\r\n# rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=[\"rouge2\"])[\"rouge2\"].mid\r\n#\r\n# return {\r\n# \"rouge2_precision\": round(rouge_output.precision, 4),\r\n# \"rouge2_recall\": round(rouge_output.recall, 4),\r\n# \"rouge2_fmeasure\": round(rouge_output.fmeasure, 4),\r\n# }\r\n\r\n\r\n# set batch size here\r\nbatch_size = 1\r\n\r\n# make train dataset ready\r\n# train_dataset = train_dataset.map(\r\n# map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=[\"article\", \"highlights\"],\r\n# )\r\n# train_dataset.set_format(\r\n# type=\"torch\", columns=[\"input_ids\", \"attention_mask\", \"decoder_input_ids\", 
\"decoder_attention_mask\", \"labels\"],\r\n# )\r\n\r\n# same for validation dataset\r\nval_dataset = val_dataset.map(\r\n map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=[\"article\", \"highlights\"],\r\n)\r\nval_dataset.set_format(\r\n type=\"torch\", columns=[\"input_ids\", \"attention_mask\", \"decoder_input_ids\", \"decoder_attention_mask\", \"labels\"],\r\n)\r\n\r\n# set training arguments - these params are not really tuned, feel free to change\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./\",\r\n per_device_train_batch_size=batch_size,\r\n per_device_eval_batch_size=batch_size,\r\n evaluate_during_training=True,\r\n do_train=True,\r\n do_eval=True,\r\n logging_steps=1000,\r\n save_steps=1000,\r\n eval_steps=1000,\r\n overwrite_output_dir=True,\r\n warmup_steps=2000,\r\n save_total_limit=10,\r\n local_rank=local_rank\r\n)\r\n\r\n# instantiate trainer\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n # compute_metrics=compute_metrics,\r\n train_dataset=val_dataset,\r\n eval_dataset=train_dataset,\r\n)\r\n\r\n# start training\r\ntrainer.train()\r\n````\r\n\r\nran it using\r\n`python -m torch.distributed.launch --nproc_per_node ${GPUS_ALLOWED} --use_env patrick_script.py`\r\n\r\n\r\nerror stack trace\r\n```\r\nI1021 08:03:09.387925 140506964375360 arrow_dataset.py:905] Loading cached processed dataset at /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/d8c27f2d603e2864036d92b0ec379f081896f6c28605ffd2e194c42cd04d48d8/cache-34961a58ac716d5b0323e755fe4ab272.arrow\r\nI1021 08:03:09.390125 139812404635456 filelock.py:274] Lock 139808428511864 acquired on /root/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.4fe1f8a4d3f3c15617ba15dd2d93f559a09627c62d0b04e22f89a5131b7bffb9.py.lock\r\nI1021 08:03:09.390361 139812404635456 load.py:331] Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /usr/local/lib/python3.6/dist-packages/nlp/datasets/cnn_dailymail\r\nI1021 08:03:09.390519 139812404635456 load.py:344] Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /usr/local/lib/python3.6/dist-packages/nlp/datasets/cnn_dailymail/d8c27f2d603e2864036d92b0ec379f081896f6c28605ffd2e194c42cd04d48d8\r\nI1021 08:03:09.390662 139812404635456 load.py:357] Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py to /usr/local/lib/python3.6/dist-packages/nlp/datasets/cnn_dailymail/d8c27f2d603e2864036d92b0ec379f081896f6c28605ffd2e194c42cd04d48d8/cnn_dailymail.py\r\nI1021 08:03:09.390966 139812404635456 load.py:371] Found dataset infos file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/dataset_infos.json to /usr/local/lib/python3.6/dist-packages/nlp/datasets/cnn_dailymail/d8c27f2d603e2864036d92b0ec379f081896f6c28605ffd2e194c42cd04d48d8/dataset_infos.json\r\nI1021 08:03:09.391108 139812404635456 load.py:382] Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /usr/local/lib/python3.6/dist-packages/nlp/datasets/cnn_dailymail/d8c27f2d603e2864036d92b0ec379f081896f6c28605ffd2e194c42cd04d48d8/cnn_dailymail.json\r\nI1021 08:03:09.391246 139812404635456 filelock.py:318] Lock 139808428511864 released on 
/root/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.4fe1f8a4d3f3c15617ba15dd2d93f559a09627c62d0b04e22f89a5131b7bffb9.py.lock\r\nI1021 08:03:09.394302 139812404635456 info.py:236] Loading Dataset Infos from /usr/local/lib/python3.6/dist-packages/nlp/datasets/cnn_dailymail/d8c27f2d603e2864036d92b0ec379f081896f6c28605ffd2e194c42cd04d48d8\r\nI1021 08:03:09.395207 140506964375360 arrow_dataset.py:563] Set __getitem__(key) output type to torch for ['input_ids', 'attention_mask', 'decoder_input_ids', 'decoder_attention_mask', 'labels'] columns (when key is int or slice) and don't output other (un-formated) columns.\r\n/usr/local/lib/python3.6/dist-packages/transformers/training_args.py:332: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options)\r\n FutureWarning,\r\nI1021 08:03:09.395819 139812404635456 builder.py:169] Overwrite dataset info from restored data version.\r\nI1021 08:03:09.396007 139812404635456 info.py:194] Loading Dataset info from /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/d8c27f2d603e2864036d92b0ec379f081896f6c28605ffd2e194c42cd04d48d8\r\nI1021 08:03:09.396629 139812404635456 builder.py:388] Reusing dataset cnn_dailymail (/root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/d8c27f2d603e2864036d92b0ec379f081896f6c28605ffd2e194c42cd04d48d8)\r\nI1021 08:03:09.396816 139812404635456 builder.py:590] Constructing Dataset for split validation[:1%], from /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/d8c27f2d603e2864036d92b0ec379f081896f6c28605ffd2e194c42cd04d48d8\r\nI1021 08:03:09.400055 139812404635456 info_utils.py:39] All the checksums matched successfully for post processing resources\r\nI1021 08:03:09.434219 139812404635456 arrow_dataset.py:905] Loading cached processed dataset at /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/d8c27f2d603e2864036d92b0ec379f081896f6c28605ffd2e194c42cd04d48d8/cache-34961a58ac716d5b0323e755fe4ab272.arrow\r\nI1021 08:03:09.441347 139812404635456 arrow_dataset.py:563] Set __getitem__(key) output type to torch for ['input_ids', 'attention_mask', 'decoder_input_ids', 'decoder_attention_mask', 'labels'] columns (when key is int or slice) and don't output other (un-formated) columns.\r\n/usr/local/lib/python3.6/dist-packages/transformers/training_args.py:332: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options)\r\n FutureWarning,\r\n/usr/local/lib/python3.6/dist-packages/nlp/utils/py_utils.py:191: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)\r\n return function(data_struct)\r\nEpoch: 0%| | 0/3 [00:00<?, ?it/s/usr/local/lib/python3.6/dist-packages/nlp/utils/py_utils.py:191: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. 
This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)\r\n return function(data_struct)\r\n Traceback (most recent call last): | 1/67 [00:00<00:40, 1.63it/s]\r\n File \"patrick_script.py\", line 129, in <module>\r\n trainer.train()\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 763, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 1113, in training_step\r\nTraceback (most recent call last):\r\n File \"patrick_script.py\", line 129, in <module>\r\n loss = self.compute_loss(model, inputs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 1137, in compute_loss\r\n trainer.train()\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 763, in train\r\n outputs = model(**inputs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/distributed.py\", line 526, in forward\r\n tr_loss += self.training_step(model, inputs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 1113, in training_step\r\n self.reducer.prepare_for_backward(list(_find_tensors(output)))\r\nRuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).\r\n loss = self.compute_loss(model, inputs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 1137, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/distributed.py\", line 526, in forward\r\n self.reducer.prepare_for_backward(list(_find_tensors(output)))\r\nRuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. 
list, dict, iterable).\r\nEpoch: 0%| | 0/3 [00:00<?, ?it/s]\r\nIteration: 1%|█▊ | 1/67 [00:00<01:02, 1.06it/s]\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py\", line 261, in <module>\r\n main()\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py\", line 257, in main\r\n cmd=cmd)\r\nsubprocess.CalledProcessError: Command '['/usr/local/bin/python', '-u', 'patrick_script.py']' returned non-zero exit status 1.\r\n```\r\n\r\n@patrickvonplaten I will be happy to fix this issue if you can give me some lead as to what might be wrong in code.",
"The problem is, in the encoder forward method, BERT forward method returns a tuple `(encoder_hidden_state, pooler_output)` [link](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L847).\r\nNow in encoder-decoder model for decoding, we use only `encoder_hidden_state` [link](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_encoder_decoder.py#L406). \r\nThus will be no grad computed for the `pooler` layer and the code is breaking.\r\n\r\nSolution\r\npass `encoder_add_pooling_layer=False` in model intialisation.\r\n\r\nModel Architecture\r\n```\r\n(encoder): RobertaModel(\r\n (embeddings): RobertaEmbeddings(\r\n (word_embeddings): Embedding(50265, 768, padding_idx=1)\r\n (position_embeddings): Embedding(514, 768, padding_idx=1)\r\n (token_type_embeddings): Embedding(1, 768)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (encoder): RobertaEncoder(\r\n (layer): ModuleList(\r\n (0): RobertaLayer(\r\n (attention): RobertaAttention(\r\n (self): RobertaSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): RobertaSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): RobertaIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): RobertaOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n )\r\n )\r\n (pooler): RobertaPooler(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (activation): Tanh()\r\n )\r\n )\r\n```",
"@patrickvonplaten It is interesting that BERT forward method returns both `encoder output` and `pooler output`. How should we handle this in `EncoderDecoderModel`. Should we change BERT implementation? or we may set `pooler` layer as unused parameter. Not sure how to do that.",
"Hey @ayubSubhaniya - yeah I see where the bug is coming from! \r\nYour reasoning is 100% correct, great catch! \r\n\r\nI think the best solution is to actually pass a `encoder_add_pooling_layer=False` variable at initialization so it looks like:\r\n\r\n```python\r\nfrom transformers import EncoderDecoderModel\r\nmodel = EncoderDecoderModel.from_encoder_decoder_pretrained(\"bert-base-uncased\", \"bert-base-uncased\", encoder_add_pooling_layer=False)\r\nprint(model.encoder.pooler) # should give `None`\r\n```",
"This is pretty hard to see though, so I think we should add an explicit warning/error message for this case. \r\n\r\nI think one thing we should do is add a warning statement to the `__init__` of all `BertModel`, `RobertaModel`, ... (all models have this pooling layer structure) that checks a) if the model is in parallel mode - think this can be done via `isinstance(m, nn.DataParallel)` and b) if `add_pooling_layer=True` => If both a) and b) are True => then display a warning `That if model is used within `EncoderDecoderModel` and if errors arrises with unused parameters, use `encoder_add_pooling_layer=False`. \r\n\r\nI think this is the best we can to do help the user. It would be amazing if you want to open a PR for this -> otherwise I'll add it to my ToDo List :-) ",
"Will create a PR for this thanks 🙂 ",
"```\r\nfrom transformers import EncoderDecoderModel\r\nmodel = EncoderDecoderModel.from_encoder_decoder_pretrained(\"bert-base-uncased\", \"bert-base-uncased\", encoder_add_pooling_layer=False)\r\nprint(model.encoder.pooler) # should give `None`\r\n```\r\nthis does print None but after \r\n`model.save_pretrained(\"model\")`\r\nthe error returns and `print(model.encoder.pooler)`\r\nprint \r\n```\r\nLongformerPooler(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (activation): Tanh()\r\n)\r\n```",
"@alexyalunin yes I too faced same issue, workaround for this is to set pooler None explicitly .\r\n```model.encoder.pooler = None```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,603 | 1,619 | 1,619 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.8
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Using Distributed
### Who can help
Trainer: @sgugger
EncoderDecoderModel: @patrickvonplaten
## Information
I am using `distilroberta-base` as the encoder and `distilgpt2` as the decoder. I used `EncoderDecoderModel` for model initialisation and the `Trainer` class for fine-tuning the model. For launching distributed processes, I used `torch.distributed.launch`.
Below is the model config.
```
{
"add_cross_attention": false,
"architectures": null,
"bad_words_ids": null,
"bos_token_id": null,
"chunk_size_feed_forward": 0,
"decoder": {
"_num_labels": 1,
"activation_function": "gelu_new",
"add_cross_attention": true,
"architectures": [
"GPT2LMHeadModel"
],
"attn_pdrop": 0.1,
"bad_words_ids": null,
"bos_token_id": 50256,
"chunk_size_feed_forward": 0,
"decoder_start_token_id": null,
"do_sample": false,
"early_stopping": false,
"embd_pdrop": 0.1,
"eos_token_id": 50256,
"finetuning_task": null,
"gradient_checkpointing": false,
"id2label": {
"0": "LABEL_0"
},
"initializer_range": 0.02,
"is_decoder": true,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0
},
"layer_norm_epsilon": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"min_length": 0,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_inner": null,
"n_layer": 6,
"n_positions": 1024,
"no_repeat_ngram_size": 0,
"num_beams": 1,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"pad_token_id": null,
"prefix": null,
"pruned_heads": {},
"repetition_penalty": 1.0,
"resid_pdrop": 0.1,
"return_dict": false,
"sep_token_id": null,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"task_specific_params": {
"text-generation": {
"do_sample": true,
"max_length": 50
}
},
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"use_bfloat16": false,
"use_cache": true,
"vocab_size": 50257,
"xla_device": null
},
"decoder_start_token_id": null,
"do_sample": false,
"early_stopping": false,
"encoder": {
"add_cross_attention": false,
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bad_words_ids": null,
"bos_token_id": 0,
"chunk_size_feed_forward": 0,
"decoder_start_token_id": null,
"do_sample": false,
"early_stopping": false,
"eos_token_id": 2,
"finetuning_task": null,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 514,
"min_length": 0,
"model_type": "roberta",
"no_repeat_ngram_size": 0,
"num_attention_heads": 12,
"num_beams": 1,
"num_hidden_layers": 1,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"pad_token_id": 1,
"prefix": null,
"pruned_heads": {},
"repetition_penalty": 1.0,
"return_dict": false,
"sep_token_id": null,
"task_specific_params": null,
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 1,
"use_bfloat16": false,
"use_cache": true,
"vocab_size": 50265,
"xla_device": null
},
"eos_token_id": null,
"finetuning_task": null,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"is_decoder": false,
"is_encoder_decoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"length_penalty": 1.0,
"max_length": 20,
"min_length": 0,
"model_type": "encoder_decoder",
"no_repeat_ngram_size": 0,
"num_beams": 1,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"pad_token_id": null,
"prefix": null,
"pruned_heads": {},
"repetition_penalty": 1.0,
"return_dict": false,
"sep_token_id": null,
"task_specific_params": null,
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"use_bfloat16": false,
"use_cache": true,
"xla_device": null
}
```
I got this error when using `Trainer` to fine-tune this model.
```
Traceback (most recent call last):
File "training/run_training.py", line 200, in <module>
raise e
File "training/run_training.py", line 197, in <module>
main()
File "training/run_training.py", line 163, in main
trainer.train(model_path=model_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 768, in train
tr_loss += self.training_step(model, inputs)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1116, in training_step
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1142, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/distributed.py", line 526, in forward
self.reducer.prepare_for_backward(list(_find_tensors(output)))
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
```
I checked that the variable `find_unused_parameters` is set to true. Also, the training above works well in a single-GPU setting. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7924/timeline | completed | null | null |
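For readers reproducing the traceback above outside of `Trainer`, a minimal sketch of the manual DDP wrapping the error message suggests (assumed single-node setup; the rendezvous environment variables are provided by `torch.distributed.launch`):

```python
# find_unused_parameters=True lets DDP tolerate parameters, such as the
# encoder pooler, that never contribute to the loss.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from transformers import EncoderDecoderModel

dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
).to(local_rank)
ddp_model = DDP(model, device_ids=[local_rank], find_unused_parameters=True)
```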
https://api.github.com/repos/huggingface/transformers/issues/7923 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7923/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7923/comments | https://api.github.com/repos/huggingface/transformers/issues/7923/events | https://github.com/huggingface/transformers/issues/7923 | 725,236,623 | MDU6SXNzdWU3MjUyMzY2MjM= | 7,923 | Loading a pytorch quantized model | {
"login": "amanpreet692",
"id": 42522643,
"node_id": "MDQ6VXNlcjQyNTIyNjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/42522643?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amanpreet692",
"html_url": "https://github.com/amanpreet692",
"followers_url": "https://api.github.com/users/amanpreet692/followers",
"following_url": "https://api.github.com/users/amanpreet692/following{/other_user}",
"gists_url": "https://api.github.com/users/amanpreet692/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amanpreet692/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amanpreet692/subscriptions",
"organizations_url": "https://api.github.com/users/amanpreet692/orgs",
"repos_url": "https://api.github.com/users/amanpreet692/repos",
"events_url": "https://api.github.com/users/amanpreet692/events{/privacy}",
"received_events_url": "https://api.github.com/users/amanpreet692/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello @amanpreet692 \r\nAre you making sure you are reloading the saved (quantized) weights into a quantized model?\r\nA dense checkpoint and its quantized counterpart have two different state dicts and I suspect you are re-initializing random matrices because when loading into from_pretrained it doesn't find the right keys in the state dict.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,609 | 1,609 | CONTRIBUTOR | null | Hi, I quantized a pre-trained model via **torch.quantization.quantize_dynamic** and saved it using **save_pretrained**.
This is based on https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/dynamic_quantization_bert_tutorial.ipynb#scrollTo=foe-dVxHIgOC .
On reloading it using **from_pretrained**, the model blows up to its original size again and the resulting predictions are garbage as well.
Is there a way to properly save the quantized weights and reload them?
Opening an issue because I couldn't find a resolution for this.
@mfuntowicz @VictorSanh @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7923/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7922 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7922/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7922/comments | https://api.github.com/repos/huggingface/transformers/issues/7922/events | https://github.com/huggingface/transformers/issues/7922 | 725,229,649 | MDU6SXNzdWU3MjUyMjk2NDk= | 7,922 | Is it possible to recommend the deployment method for implementing a trained model | {
"login": "aixuedegege",
"id": 19356707,
"node_id": "MDQ6VXNlcjE5MzU2NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/19356707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aixuedegege",
"html_url": "https://github.com/aixuedegege",
"followers_url": "https://api.github.com/users/aixuedegege/followers",
"following_url": "https://api.github.com/users/aixuedegege/following{/other_user}",
"gists_url": "https://api.github.com/users/aixuedegege/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aixuedegege/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aixuedegege/subscriptions",
"organizations_url": "https://api.github.com/users/aixuedegege/orgs",
"repos_url": "https://api.github.com/users/aixuedegege/repos",
"events_url": "https://api.github.com/users/aixuedegege/events{/privacy}",
"received_events_url": "https://api.github.com/users/aixuedegege/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I find this https://pytorch.org/blog/model-serving-in-pyorch . It is very useful! "
] | 1,603 | 1,603 | 1,603 | NONE | null | # ❓ Questions & Help
I trained a model using the `Trainer` class. Could you recommend a high-performance deployment approach? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7922/timeline | completed | null | null |
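One concrete option along the lines of the blog post linked above: export the `Trainer`-saved model to TorchScript for serving (a sketch only; `./my_model` and the classification head are placeholders for whatever was actually trained):

```python
# Trace the fine-tuned model so it can be served without Python model code,
# e.g. from a C++ runtime or a TorchServe handler.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("./my_model")
model = AutoModelForSequenceClassification.from_pretrained("./my_model", torchscript=True)
model.eval()

inputs = tokenizer("example input", return_tensors="pt")
traced = torch.jit.trace(model, (inputs["input_ids"], inputs["attention_mask"]))
torch.jit.save(traced, "model_traced.pt")
```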