url stringlengths 62-66 | repository_url stringclasses 1 value | labels_url stringlengths 76-80 | comments_url stringlengths 71-75 | events_url stringlengths 69-73 | html_url stringlengths 50-56 | id int64 377M-2.15B | node_id stringlengths 18-32 | number int64 1-29.2k | title stringlengths 1-487 | user dict | labels list | state stringclasses 2 values | locked bool 2 classes | assignee dict | assignees list | comments sequence | created_at int64 1.54k-1.71k | updated_at int64 1.54k-1.71k | closed_at int64 1.54k-1.71k ⌀ | author_association stringclasses 4 values | active_lock_reason stringclasses 2 values | body stringlengths 0-234k ⌀ | reactions dict | timeline_url stringlengths 71-75 | state_reason stringclasses 3 values | draft bool 2 classes | pull_request dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/10935 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10935/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10935/comments | https://api.github.com/repos/huggingface/transformers/issues/10935/events | https://github.com/huggingface/transformers/issues/10935 | 842,656,498 | MDU6SXNzdWU4NDI2NTY0OTg= | 10,935 | Add DALL-E: Zero-Shot Text-to-Image Generation | {
"login": "slavakurilyak",
"id": 6625584,
"node_id": "MDQ6VXNlcjY2MjU1ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6625584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slavakurilyak",
"html_url": "https://github.com/slavakurilyak",
"followers_url": "https://api.github.com/users/slavakurilyak/followers",
"following_url": "https://api.github.com/users/slavakurilyak/following{/other_user}",
"gists_url": "https://api.github.com/users/slavakurilyak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slavakurilyak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slavakurilyak/subscriptions",
"organizations_url": "https://api.github.com/users/slavakurilyak/orgs",
"repos_url": "https://api.github.com/users/slavakurilyak/repos",
"events_url": "https://api.github.com/users/slavakurilyak/events{/privacy}",
"received_events_url": "https://api.github.com/users/slavakurilyak/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"+1",
"Does dall-e mini currently added to transformers?\r\nCurrently, it doesn't `eBart` is not recognized in transformers library.",
"cc @patil-suraj who's currently working on making it easier to use from transformers",
"There are dozens of DALL-E models currently listed on the Hugging Face site. Unless this is a specific variant/implementation that has yet to be added, it seems this issue can be closed."
] | 1,616 | 1,696 | null | NONE | null | # 🚀 Feature request
Please add the DALL-E model to Hugging Face's Transformers library.
1. [Announcement](https://openai.com/blog/dall-e/)
2. [Abstract](https://arxiv.org/abs/2102.12092v2)
3. [Paper](https://arxiv.org/pdf/2102.12092v2.pdf)
4. Code:
- [openai/DALL-E](https://github.com/openai/DALL-E) (official)
- [lucidrains/DALLE-pytorch](https://github.com/lucidrains/DALLE-pytorch) ([Colab](https://colab.research.google.com/drive/1dWvA54k4fH8zAmiix3VXbg95uEIMfqQM?usp=sharing))
## Motivation
> DALL·E is a 12-billion parameter version of [GPT-3](https://huggingface.co/transformers/model_doc/gpt.html) trained to generate images from text descriptions, using a dataset of text–image pairs
>
> We (OpenAI) decided to name our model using a portmanteau of the artist Salvador Dalí and Pixar’s WALL·E.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10935/reactions",
"total_count": 9,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 7,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10935/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10934 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10934/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10934/comments | https://api.github.com/repos/huggingface/transformers/issues/10934/events | https://github.com/huggingface/transformers/pull/10934 | 842,621,511 | MDExOlB1bGxSZXF1ZXN0NjAyMTU0Njkx | 10,934 | Add `examples/multiple-choice/run_swag_no_trainer.py` | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Tested on one GPU, two GPUs and TPUs, this runs fine everywhere. So just waiting for the small adjustments and it should be good to be merged :-) ",
"Thanks a lot!"
] | 1,616 | 1,617 | 1,617 | CONTRIBUTOR | null | This PR adds an example of a multiple-choice task on the SWAG dataset to show the functionalities of the new `accelerate` library.
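For context, below is a minimal sketch of the training-loop pattern that the `accelerate`-based `*_no_trainer` scripts follow; the tiny model and random data are placeholders for illustration, not the actual SWAG setup in this PR.

```python
# Minimal sketch of the `accelerate` training-loop pattern (placeholder model
# and data, not the actual SWAG setup from this PR).
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

# Placeholder model and data; a real script builds these from a pretrained
# checkpoint and a tokenized dataset.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
dataset = TensorDataset(torch.randn(32, 10), torch.randint(0, 2, (32,)))
dataloader = DataLoader(dataset, batch_size=4)

# prepare() moves everything to the right device(s) and wraps them for
# distributed training when applicable.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for inputs, labels in dataloader:
    logits = model(inputs)
    loss = torch.nn.functional.cross_entropy(logits, labels)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```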
<hr>
**Reviewers:** @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10934/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10934",
"html_url": "https://github.com/huggingface/transformers/pull/10934",
"diff_url": "https://github.com/huggingface/transformers/pull/10934.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10934.patch",
"merged_at": 1617050469000
} |
https://api.github.com/repos/huggingface/transformers/issues/10933 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10933/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10933/comments | https://api.github.com/repos/huggingface/transformers/issues/10933/events | https://github.com/huggingface/transformers/issues/10933 | 842,596,293 | MDU6SXNzdWU4NDI1OTYyOTM= | 10,933 | Can't download the facebook/bart-large-mnli tensorflow model | {
"login": "mayanb",
"id": 7052505,
"node_id": "MDQ6VXNlcjcwNTI1MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7052505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mayanb",
"html_url": "https://github.com/mayanb",
"followers_url": "https://api.github.com/users/mayanb/followers",
"following_url": "https://api.github.com/users/mayanb/following{/other_user}",
"gists_url": "https://api.github.com/users/mayanb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mayanb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mayanb/subscriptions",
"organizations_url": "https://api.github.com/users/mayanb/orgs",
"repos_url": "https://api.github.com/users/mayanb/repos",
"events_url": "https://api.github.com/users/mayanb/events{/privacy}",
"received_events_url": "https://api.github.com/users/mayanb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@mayanb I don't think this model has a TF version till now, check:- https://huggingface.co/facebook/bart-large-mnli/tree/main",
"@frankhart2018 ah you're right! thanks!"
] | 1,616 | 1,616 | 1,616 | NONE | null | Hello! When I try to create a pipeline with the model specified as "facebook/bart-large-mnli" I get a 404 Client Error: Not Found for url: https://huggingface.co/facebook/bart-large-mnli/resolve/main/tf_model.h5
When I go directly to that URL, I also see a 404 error. Any ideas on how to fix this would be greatly appreciated! Thanks!
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.2
- Python version: 3.8.6
- Tensorflow version: 2.2.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
The code I tried running is:
```python
from transformers import pipeline
classifier = pipeline('zero-shot-classification', model='facebook/bart-large-mnli')
```
The full error message is:
```
404 Client Error: Not Found for url: https://huggingface.co/facebook/bart-large-mnli/resolve/main/tf_model.h5
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
~/Documents/github/econ2355/version2env/lib/python3.8/site-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1219 # Load from URL or cache if already cached
-> 1220 resolved_archive_file = cached_path(
1221 archive_file,
~/Documents/github/econ2355/version2env/lib/python3.8/site-packages/transformers/file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, use_auth_token, local_files_only)
1133 # URL, so get it from the cache (downloading if necessary)
-> 1134 output_path = get_from_cache(
1135 url_or_filename,
~/Documents/github/econ2355/version2env/lib/python3.8/site-packages/transformers/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, use_auth_token, local_files_only)
1299 r = requests.head(url, headers=headers, allow_redirects=False, proxies=proxies, timeout=etag_timeout)
-> 1300 r.raise_for_status()
1301 etag = r.headers.get("X-Linked-Etag") or r.headers.get("ETag")
~/Documents/github/econ2355/version2env/lib/python3.8/site-packages/requests/models.py in raise_for_status(self)
942 if http_error_msg:
--> 943 raise HTTPError(http_error_msg, response=self)
944
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/facebook/bart-large-mnli/resolve/main/tf_model.h5
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-2-7aad78410119> in <module>
14
15 from transformers import pipeline
---> 16 classifier = pipeline('zero-shot-classification',
17 model='facebook/bart-large-mnli')
18
~/Documents/github/econ2355/version2env/lib/python3.8/site-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, framework, revision, use_fast, model_kwargs, **kwargs)
342 model = get_default_model(targeted_task, framework, task_options)
343
--> 344 framework = framework or get_framework(model)
345
346 task_class, model_class = targeted_task["impl"], targeted_task[framework]
~/Documents/github/econ2355/version2env/lib/python3.8/site-packages/transformers/pipelines/base.py in get_framework(model, revision)
66 model = AutoModel.from_pretrained(model, revision=revision)
67 elif is_tf_available() and not is_torch_available():
---> 68 model = TFAutoModel.from_pretrained(model, revision=revision)
69 else:
70 try:
~/Documents/github/econ2355/version2env/lib/python3.8/site-packages/transformers/models/auto/modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
616
617 if type(config) in TF_MODEL_MAPPING.keys():
--> 618 return TF_MODEL_MAPPING[type(config)].from_pretrained(
619 pretrained_model_name_or_path, *model_args, config=config, **kwargs
620 )
~/Documents/github/econ2355/version2env/lib/python3.8/site-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1234 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a file named one of {TF2_WEIGHTS_NAME}, {WEIGHTS_NAME}.\n\n"
1235 )
-> 1236 raise EnvironmentError(msg)
1237 if resolved_archive_file == archive_file:
1238 logger.info("loading weights file {}".format(archive_file))
OSError: Can't load weights for 'facebook/bart-large-mnli'. Make sure that:
- 'facebook/bart-large-mnli' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'facebook/bart-large-mnli' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.
```
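For reference (an editorial addition, not from the original thread): as noted in the comments, this checkpoint only ships PyTorch weights, so the 404 on `tf_model.h5` is expected. A minimal sketch of two possible workarounds, assuming the relevant frameworks are installed:

```python
# Sketch of two possible workarounds (assumptions, not from the thread).

# Option 1: run the pipeline on the PyTorch weights (requires torch).
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification", model="facebook/bart-large-mnli", framework="pt"
)

# Option 2: convert the PyTorch weights to TensorFlow once and reuse them
# (requires both torch and TensorFlow for the conversion step).
from transformers import TFAutoModelForSequenceClassification

tf_model = TFAutoModelForSequenceClassification.from_pretrained(
    "facebook/bart-large-mnli", from_pt=True
)
tf_model.save_pretrained("./bart-large-mnli-tf")  # placeholder output directory
```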
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10933/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10932 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10932/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10932/comments | https://api.github.com/repos/huggingface/transformers/issues/10932/events | https://github.com/huggingface/transformers/pull/10932 | 842,573,616 | MDExOlB1bGxSZXF1ZXN0NjAyMTE5ODM1 | 10,932 | Updated colab links in readme of examples | {
"login": "WybeKoper",
"id": 40920213,
"node_id": "MDQ6VXNlcjQwOTIwMjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/40920213?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WybeKoper",
"html_url": "https://github.com/WybeKoper",
"followers_url": "https://api.github.com/users/WybeKoper/followers",
"following_url": "https://api.github.com/users/WybeKoper/following{/other_user}",
"gists_url": "https://api.github.com/users/WybeKoper/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WybeKoper/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WybeKoper/subscriptions",
"organizations_url": "https://api.github.com/users/WybeKoper/orgs",
"repos_url": "https://api.github.com/users/WybeKoper/repos",
"events_url": "https://api.github.com/users/WybeKoper/events{/privacy}",
"received_events_url": "https://api.github.com/users/WybeKoper/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,617 | 1,617 | CONTRIBUTOR | null | # What does this PR do?
Updated the Google Colab links for The Big Table of Tasks in the examples folder README.
The Google Colab links were replaced with the appropriate examples found [here](https://github.com/huggingface/notebooks/tree/master/examples).
text_generation.ipynb is not present in the [notebook repo](https://github.com/huggingface/notebooks/tree/master/examples). Will text_generation.ipynb be added in the future?
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10932/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10932",
"html_url": "https://github.com/huggingface/transformers/pull/10932",
"diff_url": "https://github.com/huggingface/transformers/pull/10932.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10932.patch",
"merged_at": 1617022029000
} |
https://api.github.com/repos/huggingface/transformers/issues/10931 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10931/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10931/comments | https://api.github.com/repos/huggingface/transformers/issues/10931/events | https://github.com/huggingface/transformers/issues/10931 | 842,547,673 | MDU6SXNzdWU4NDI1NDc2NzM= | 10,931 | Another way to express masked_index = torch.nonzero(input_ids == self.tokenizer.mask_token_id, as_tuple=False) | {
"login": "moh-yani",
"id": 55953151,
"node_id": "MDQ6VXNlcjU1OTUzMTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/55953151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moh-yani",
"html_url": "https://github.com/moh-yani",
"followers_url": "https://api.github.com/users/moh-yani/followers",
"following_url": "https://api.github.com/users/moh-yani/following{/other_user}",
"gists_url": "https://api.github.com/users/moh-yani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moh-yani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moh-yani/subscriptions",
"organizations_url": "https://api.github.com/users/moh-yani/orgs",
"repos_url": "https://api.github.com/users/moh-yani/repos",
"events_url": "https://api.github.com/users/moh-yani/events{/privacy}",
"received_events_url": "https://api.github.com/users/moh-yani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Unfortunately, recent transformers versions only work with torch 1.4.0+. The README is incorrect respective to that, and I'll update it in the coming days.",
"> Hello! Unfortunately, recent transformers versions only work with torch 1.4.0+. The README is incorrect respective to that, and I'll update it in the coming days.\r\n\r\nThank you for the response.\r\n\r\nOkay. Does it mean that there is no another way for expressing `masked_index = torch.nonzero(input_ids == self.tokenizer.mask_token_id, as_tuple=False)` in pytorch 1.1.0? Is it still possible to use `torch.where(...)`? if yes, how to express it with `torch.where()`?\r\n\r\nThis is because the available machine we used is in an old driver version of CUDA.\r\n\r\nSincerely,\r\nMY",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,620 | 1,620 | NONE | null | Dear,
We use PyTorch 1.1.0 to run masked word completion with BERT. However, we get the following error:
`TypeError: nonzero() got an unexpected keyword argument 'as_tuple'`
The error refers to this:
`masked_index = torch.nonzero(input_ids == self.tokenizer.mask_token_id, as_tuple=False)`
Is there another way to express the line above while keeping PyTorch 1.1.0?
Best regards,
Mohammad YANI
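For reference (an editorial addition, not part of the original issue): on old PyTorch releases the `as_tuple` keyword can simply be dropped, since `nonzero()` already returned a 2-D tensor of indices there. A minimal sketch with toy inputs:

```python
# Sketch with toy inputs (not from the issue); 103 plays the role of the
# [MASK] token id.
import torch

input_ids = torch.tensor([[101, 2054, 103, 2003, 102]])
mask_token_id = 103

# Works on old and new PyTorch alike; same result as as_tuple=False.
masked_index = (input_ids == mask_token_id).nonzero()

# On recent PyTorch, torch.where(condition) gives the as_tuple=True form;
# whether the single-argument form exists on 1.1.0 would need checking.
masked_index_tuple = torch.where(input_ids == mask_token_id)

print(masked_index, masked_index_tuple)
```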
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10931/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10930 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10930/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10930/comments | https://api.github.com/repos/huggingface/transformers/issues/10930/events | https://github.com/huggingface/transformers/issues/10930 | 842,542,195 | MDU6SXNzdWU4NDI1NDIxOTU= | 10,930 | Error while predicting on single sentence for token classification task | {
"login": "saurabhhssaurabh",
"id": 7511230,
"node_id": "MDQ6VXNlcjc1MTEyMzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7511230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saurabhhssaurabh",
"html_url": "https://github.com/saurabhhssaurabh",
"followers_url": "https://api.github.com/users/saurabhhssaurabh/followers",
"following_url": "https://api.github.com/users/saurabhhssaurabh/following{/other_user}",
"gists_url": "https://api.github.com/users/saurabhhssaurabh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saurabhhssaurabh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saurabhhssaurabh/subscriptions",
"organizations_url": "https://api.github.com/users/saurabhhssaurabh/orgs",
"repos_url": "https://api.github.com/users/saurabhhssaurabh/repos",
"events_url": "https://api.github.com/users/saurabhhssaurabh/events{/privacy}",
"received_events_url": "https://api.github.com/users/saurabhhssaurabh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@saurabhhssaurabh `trainer.predict()` expects an instance of `torch.utils.data.Dataset` to be passed and not a single sentence. I think it will be easier to use the trained model at self.model to predict rather than trying to use trainer object, as it does not have any single prediction method yet.",
"@frankhart2018 \r\nThank you for replying. I will implement code using prediction from self.model."
] | 1,616 | 1,616 | 1,616 | NONE | null | Hi
I have fine-tuned BERT for an NER task. I am running prediction with the fine-tuned model as follows:
`output = self.tokenizer(text)`
`trainer = Trainer(model=self.model, tokenizer=self.tokenizer)`
`trainer.predict(output)`
This code snippet is throwing the following error:
File "run_ner_test_3.py", line 486, in <module>
obj.predict(text="i require to send 9330793.33 by account")
File "run_ner_test_3.py", line 430, in predict
trainer.predict(output)
File "/home/dev01/python_3/lib/python3.6/site-packages/transformers/trainer.py", line 1596, in predict
test_dataloader, description="Prediction", ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix
File "/home/dev01/python_3/lib/python3.6/site-packages/transformers/trainer.py", line 1658, in prediction_loop
for step, inputs in enumerate(dataloader):
File "/home/dev01/python_3/lib64/python3.6/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
data = self._next_data()
File "/home/dev01/python_3/lib64/python3.6/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/dev01/python_3/lib64/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/dev01/python_3/lib64/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/dev01/python_3/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 305, in __getitem__
return self._encodings[item]
IndexError: list index out of range
Can you please suggest how to predict on a single sentence? | {
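For reference (an editorial addition, not from the thread): a minimal sketch of predicting on a single sentence with the model directly, as suggested in the comments; the checkpoint path is a placeholder for the fine-tuned model.

```python
# Sketch (placeholder checkpoint path): single-sentence prediction with the
# fine-tuned token-classification model, without Trainer.predict().
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

checkpoint = "./my-finetuned-ner-model"  # placeholder path
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint)

text = "i require to send 9330793.33 by account"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
labels = [model.config.id2label[i] for i in pred_ids]
print(list(zip(tokens, labels)))
```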
"url": "https://api.github.com/repos/huggingface/transformers/issues/10930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10930/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10929 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10929/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10929/comments | https://api.github.com/repos/huggingface/transformers/issues/10929/events | https://github.com/huggingface/transformers/issues/10929 | 842,511,400 | MDU6SXNzdWU4NDI1MTE0MDA= | 10,929 | Training with DeepSpeed takes more GPU memory than without DeepSpeed | {
"login": "oriyor",
"id": 39461788,
"node_id": "MDQ6VXNlcjM5NDYxNzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/39461788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oriyor",
"html_url": "https://github.com/oriyor",
"followers_url": "https://api.github.com/users/oriyor/followers",
"following_url": "https://api.github.com/users/oriyor/following{/other_user}",
"gists_url": "https://api.github.com/users/oriyor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oriyor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oriyor/subscriptions",
"organizations_url": "https://api.github.com/users/oriyor/orgs",
"repos_url": "https://api.github.com/users/oriyor/repos",
"events_url": "https://api.github.com/users/oriyor/events{/privacy}",
"received_events_url": "https://api.github.com/users/oriyor/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"Also adding the logs from the beginning of training with deepspeed:\r\n\r\ndeepspeed examples/seq2seq/run_summarization.py --model_name_or_path t5-small --do_train --do_eval --dataset_name cnn_dailymail --dataset_config \"3.0.0\" --source_prefix \"summarize: \" --output_dir /tmp/tst-summarization --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --deepspeed examples/tests/deepspeed/ds_config.json\r\n[2021-03-27 17:02:34,357] [WARNING] [runner.py:117:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.\r\n[2021-03-27 17:02:34,381] [INFO] [runner.py:358:main] cmd = /media/disk1/oriyor/hf_venv_3.6/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMF19 --master_addr=127.0.0.1 --master_port=29500 examples/seq2seq/run_summarization.py --model_name_or_path t5-small --do_train --do_eval --dataset_name cnn_dailymail --dataset_config 3.0.0 --source_prefix summarize: --output_dir /tmp/tst-summarization --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --deepspeed examples/tests/deepspeed/ds_config.json\r\n[2021-03-27 17:02:34,981] [INFO] [launch.py:80:main] WORLD INFO DICT: {'localhost': [0]}\r\n[2021-03-27 17:02:34,981] [INFO] [launch.py:89:main] nnodes=1, num_local_procs=1, node_rank=0\r\n[2021-03-27 17:02:34,981] [INFO] [launch.py:101:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0]})\r\n[2021-03-27 17:02:34,981] [INFO] [launch.py:102:main] dist_world_size=1\r\n[2021-03-27 17:02:34,981] [INFO] [launch.py:105:main] Setting CUDA_VISIBLE_DEVICES=0\r\n[2021-03-27 17:02:36,820] [INFO] [distributed.py:47:init_distributed] Initializing torch distributed with backend: nccl\r\nWARNING:__main__:Process rank: 0, device: cuda:0, n_gpu: 1distributed training: True, 16-bits training: False\r\nINFO:__main__:Training/evaluation parameters Seq2SeqTrainingArguments(output_dir='/tmp/tst-summarization', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=<IntervalStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=4, per_device_eval_batch_size=4, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type=<SchedulerType.LINEAR: 'linear'>, warmup_ratio=0.0, warmup_steps=0, logging_dir='runs/Mar27_17-02-36_rack-jonathan-g04', logging_strategy=<IntervalStrategy.STEPS: 'steps'>, logging_first_step=False, logging_steps=500, save_strategy=<IntervalStrategy.STEPS: 'steps'>, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', fp16_backend='auto', fp16_full_eval=False, local_rank=0, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='/tmp/tst-summarization', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed='examples/tests/deepspeed/ds_config.json', label_smoothing_factor=0.0, adafactor=False, group_by_length=False, report_to=['tensorboard'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, 
sortish_sampler=False, predict_with_generate=True)\r\nWARNING:datasets.builder:Reusing dataset cnn_dailymail (/home/oriy/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0a01b1abede4f646130574f203de57a293ded8a7a11e3406a539453afdfeb2c0)\r\nloading configuration file https://huggingface.co/t5-small/resolve/main/config.json from cache at /home/oriy/.cache/huggingface/transformers/fe501e8fd6425b8ec93df37767fcce78ce626e34cc5edc859c662350cf712e41.406701565c0afd9899544c1cb8b93185a76f00b31e5ce7f6e18bbaef02241985\r\nModel config T5Config {\r\n \"architectures\": [\r\n \"T5WithLMHeadModel\"\r\n ],\r\n \"d_ff\": 2048,\r\n \"d_kv\": 64,\r\n \"d_model\": 512,\r\n \"decoder_start_token_id\": 0,\r\n \"dropout_rate\": 0.1,\r\n \"eos_token_id\": 1,\r\n \"feed_forward_proj\": \"relu\",\r\n \"initializer_factor\": 1.0,\r\n \"is_encoder_decoder\": true,\r\n \"layer_norm_epsilon\": 1e-06,\r\n \"model_type\": \"t5\",\r\n \"n_positions\": 512,\r\n \"num_decoder_layers\": 6,\r\n \"num_heads\": 8,\r\n \"num_layers\": 6,\r\n \"output_past\": true,\r\n \"pad_token_id\": 0,\r\n \"relative_attention_num_buckets\": 32,\r\n \"task_specific_params\": {\r\n \"summarization\": {\r\n \"early_stopping\": true,\r\n \"length_penalty\": 2.0,\r\n \"max_length\": 200,\r\n \"min_length\": 30,\r\n \"no_repeat_ngram_size\": 3,\r\n \"num_beams\": 4,\r\n \"prefix\": \"summarize: \"\r\n },\r\n \"translation_en_to_de\": {\r\n \"early_stopping\": true,\r\n \"max_length\": 300,\r\n \"num_beams\": 4,\r\n \"prefix\": \"translate English to German: \"\r\n },\r\n \"translation_en_to_fr\": {\r\n \"early_stopping\": true,\r\n \"max_length\": 300,\r\n \"num_beams\": 4,\r\n \"prefix\": \"translate English to French: \"\r\n },\r\n \"translation_en_to_ro\": {\r\n \"early_stopping\": true,\r\n \"max_length\": 300,\r\n \"num_beams\": 4,\r\n \"prefix\": \"translate English to Romanian: \"\r\n }\r\n },\r\n \"transformers_version\": \"4.5.0.dev0\",\r\n \"use_cache\": true,\r\n \"vocab_size\": 32128\r\n}\r\n\r\nloading configuration file https://huggingface.co/t5-small/resolve/main/config.json from cache at /home/oriy/.cache/huggingface/transformers/fe501e8fd6425b8ec93df37767fcce78ce626e34cc5edc859c662350cf712e41.406701565c0afd9899544c1cb8b93185a76f00b31e5ce7f6e18bbaef02241985\r\nModel config T5Config {\r\n \"architectures\": [\r\n \"T5WithLMHeadModel\"\r\n ],\r\n \"d_ff\": 2048,\r\n \"d_kv\": 64,\r\n \"d_model\": 512,\r\n \"decoder_start_token_id\": 0,\r\n \"dropout_rate\": 0.1,\r\n \"eos_token_id\": 1,\r\n \"feed_forward_proj\": \"relu\",\r\n \"initializer_factor\": 1.0,\r\n \"is_encoder_decoder\": true,\r\n \"layer_norm_epsilon\": 1e-06,\r\n \"model_type\": \"t5\",\r\n \"n_positions\": 512,\r\n \"num_decoder_layers\": 6,\r\n \"num_heads\": 8,\r\n \"num_layers\": 6,\r\n \"output_past\": true,\r\n \"pad_token_id\": 0,\r\n \"relative_attention_num_buckets\": 32,\r\n \"task_specific_params\": {\r\n \"summarization\": {\r\n \"early_stopping\": true,\r\n \"length_penalty\": 2.0,\r\n \"max_length\": 200,\r\n \"min_length\": 30,\r\n \"no_repeat_ngram_size\": 3,\r\n \"num_beams\": 4,\r\n \"prefix\": \"summarize: \"\r\n },\r\n \"translation_en_to_de\": {\r\n \"early_stopping\": true,\r\n \"max_length\": 300,\r\n \"num_beams\": 4,\r\n \"prefix\": \"translate English to German: \"\r\n },\r\n \"translation_en_to_fr\": {\r\n \"early_stopping\": true,\r\n \"max_length\": 300,\r\n \"num_beams\": 4,\r\n \"prefix\": \"translate English to French: \"\r\n },\r\n \"translation_en_to_ro\": {\r\n \"early_stopping\": true,\r\n \"max_length\": 300,\r\n \"num_beams\": 
4,\r\n \"prefix\": \"translate English to Romanian: \"\r\n }\r\n },\r\n \"transformers_version\": \"4.5.0.dev0\",\r\n \"use_cache\": true,\r\n \"vocab_size\": 32128\r\n}\r\n\r\nloading file https://huggingface.co/t5-small/resolve/main/spiece.model from cache at /home/oriy/.cache/huggingface/transformers/65fc04e21f45f61430aea0c4fedffac16a4d20d78b8e6601d8d996ebefefecd2.3b69006860e7b5d0a63ffdddc01ddcd6b7c318a6f4fd793596552c741734c62d\r\nloading file https://huggingface.co/t5-small/resolve/main/tokenizer.json from cache at /home/oriy/.cache/huggingface/transformers/06779097c78e12f47ef67ecb728810c2ae757ee0a9efe9390c6419783d99382d.8627f1bd5d270a9fd2e5a51c8bec3223896587cc3cfe13edeabb0992ab43c529\r\nloading file https://huggingface.co/t5-small/resolve/main/added_tokens.json from cache at None\r\nloading file https://huggingface.co/t5-small/resolve/main/special_tokens_map.json from cache at None\r\nloading file https://huggingface.co/t5-small/resolve/main/tokenizer_config.json from cache at None\r\nloading weights file https://huggingface.co/t5-small/resolve/main/pytorch_model.bin from cache at /home/oriy/.cache/huggingface/transformers/fee5a3a0ae379232608b6eed45d2d7a0d2966b9683728838412caccc41b4b0ed.ddacdc89ec88482db20c676f0861a336f3d0409f94748c209847b49529d73885\r\nAll model checkpoint weights were used when initializing T5ForConditionalGeneration.\r\n\r\nAll the weights of T5ForConditionalGeneration were initialized from the model checkpoint at t5-small.\r\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use T5ForConditionalGeneration for predictions without further training.\r\nWARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/oriy/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0a01b1abede4f646130574f203de57a293ded8a7a11e3406a539453afdfeb2c0/cache-3c2d8ad9af1d1a3e.arrow\r\nWARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/oriy/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0a01b1abede4f646130574f203de57a293ded8a7a11e3406a539453afdfeb2c0/cache-2e7e82c8de410d07.arrow\r\nUpdating the `scheduler` config from examples/tests/deepspeed/ds_config.json with other command line arguments\r\nsetting optimizer.params.lr to 5e-05\r\nsetting optimizer.params.betas to [0.9, 0.999]\r\nsetting optimizer.params.eps to 1e-08\r\nsetting optimizer.params.weight_decay to 0.0\r\nUpdating the `scheduler` config from examples/tests/deepspeed/ds_config.json with other command line arguments\r\nsetting scheduler.params.warmup_max_lr to 5e-05\r\nsetting scheduler.params.warmup_num_steps to 0\r\n[2021-03-27 17:02:46,871] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed info: version=0.3.13, git-hash=unknown, git-branch=unknown\r\n[2021-03-27 17:02:48,970] [INFO] [engine.py:77:_initialize_parameter_parallel_groups] data_parallel_size: 1, parameter_parallel_size: 1\r\nhuggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\r\nTo disable this warning, you can either:\r\n\t- Avoid using `tokenizers` before the fork if possible\r\n\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\nhuggingface/tokenizers: The current process just got forked, after parallelism has already been used. 
Disabling parallelism to avoid deadlocks...\r\nTo disable this warning, you can either:\r\n\t- Avoid using `tokenizers` before the fork if possible\r\n\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\nhuggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\r\nTo disable this warning, you can either:\r\n\t- Avoid using `tokenizers` before the fork if possible\r\n\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\nhuggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\r\nTo disable this warning, you can either:\r\n\t- Avoid using `tokenizers` before the fork if possible\r\n\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\nhuggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\r\nTo disable this warning, you can either:\r\n\t- Avoid using `tokenizers` before the fork if possible\r\n\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\nUsing /home/oriy/.cache/torch_extensions as PyTorch extensions root...\r\nhuggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\r\nTo disable this warning, you can either:\r\n\t- Avoid using `tokenizers` before the fork if possible\r\n\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\nhuggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\r\nTo disable this warning, you can either:\r\n\t- Avoid using `tokenizers` before the fork if possible\r\n\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\nhuggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\r\nTo disable this warning, you can either:\r\n\t- Avoid using `tokenizers` before the fork if possible\r\n\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\nDetected CUDA files, patching ldflags\r\nEmitting ninja build file /home/oriy/.cache/torch_extensions/cpu_adam/build.ninja...\r\nBuilding extension module cpu_adam...\r\nAllowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)\r\nhuggingface/tokenizers: The current process just got forked, after parallelism has already been used. 
Disabling parallelism to avoid deadlocks...\r\nTo disable this warning, you can either:\r\n\t- Avoid using `tokenizers` before the fork if possible\r\n\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\nninja: no work to do.\r\nLoading extension module cpu_adam...\r\nTime to load cpu_adam op: 0.43370747566223145 seconds\r\nAdam Optimizer #0 is created with AVX2 arithmetic capability.\r\nConfig: alpha=0.000050, betas=(0.900000, 0.999000), weight_decay=0.000000, adam_w=1\r\n[2021-03-27 17:02:52,144] [INFO] [engine.py:602:_configure_optimizer] Using DeepSpeed Optimizer param name adam as basic optimizer\r\n[2021-03-27 17:02:52,145] [INFO] [engine.py:606:_configure_optimizer] DeepSpeed Basic Optimizer = DeepSpeedCPUAdam\r\nChecking ZeRO support for optimizer=DeepSpeedCPUAdam type=<class 'deepspeed.ops.adam.cpu_adam.DeepSpeedCPUAdam'>\r\n[2021-03-27 17:02:52,145] [INFO] [logging.py:60:log_dist] [Rank 0] Creating fp16 ZeRO stage 2 optimizer\r\nUsing /home/oriy/.cache/torch_extensions as PyTorch extensions root...\r\nhuggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\r\nTo disable this warning, you can either:\r\n\t- Avoid using `tokenizers` before the fork if possible\r\n\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\nhuggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\r\nTo disable this warning, you can either:\r\n\t- Avoid using `tokenizers` before the fork if possible\r\n\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\nhuggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\r\nTo disable this warning, you can either:\r\n\t- Avoid using `tokenizers` before the fork if possible\r\n\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\nEmitting ninja build file /home/oriy/.cache/torch_extensions/utils/build.ninja...\r\nBuilding extension module utils...\r\nAllowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)\r\nhuggingface/tokenizers: The current process just got forked, after parallelism has already been used. 
Disabling parallelism to avoid deadlocks...\r\nTo disable this warning, you can either:\r\n\t- Avoid using `tokenizers` before the fork if possible\r\n\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\nninja: no work to do.\r\nLoading extension module utils...\r\nTime to load utils op: 0.29197263717651367 seconds\r\n[2021-03-27 17:02:52,437] [INFO] [stage2.py:130:__init__] Reduce bucket size 200000000.0\r\n[2021-03-27 17:02:52,438] [INFO] [stage2.py:131:__init__] Allgather bucket size 200000000.0\r\n[2021-03-27 17:02:52,438] [INFO] [stage2.py:132:__init__] CPU Offload: True\r\n[2021-03-27 17:02:52,846] [INFO] [stage2.py:399:__init__] optimizer state initialized\r\n[2021-03-27 17:02:52,846] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed Final Optimizer = adam\r\n[2021-03-27 17:02:52,847] [INFO] [engine.py:439:_configure_lr_scheduler] DeepSpeed using configured LR scheduler = WarmupLR\r\n[2021-03-27 17:02:52,847] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed LR Scheduler = <deepspeed.runtime.lr_schedules.WarmupLR object at 0x7fea742ef2b0>\r\n[2021-03-27 17:02:52,847] [INFO] [logging.py:60:log_dist] [Rank 0] step=0, skipped=0, lr=[5e-05], mom=[[0.9, 0.999]]\r\n[2021-03-27 17:02:52,847] [INFO] [config.py:737:print] DeepSpeedEngine configuration:\r\n[2021-03-27 17:02:52,847] [INFO] [config.py:741:print] activation_checkpointing_config {\r\n \"contiguous_memory_optimization\": false,\r\n \"cpu_checkpointing\": false,\r\n \"number_checkpoints\": null,\r\n \"partition_activations\": false,\r\n \"profile\": false,\r\n \"synchronize_checkpoint_boundary\": false\r\n}\r\n[2021-03-27 17:02:52,847] [INFO] [config.py:741:print] allreduce_always_fp32 ........ False\r\n[2021-03-27 17:02:52,847] [INFO] [config.py:741:print] amp_enabled .................. False\r\n[2021-03-27 17:02:52,847] [INFO] [config.py:741:print] amp_params ................... False\r\n[2021-03-27 17:02:52,847] [INFO] [config.py:741:print] checkpoint_tag_validation_enabled True\r\n[2021-03-27 17:02:52,847] [INFO] [config.py:741:print] checkpoint_tag_validation_fail False\r\n[2021-03-27 17:02:52,847] [INFO] [config.py:741:print] disable_allgather ............ False\r\n[2021-03-27 17:02:52,848] [INFO] [config.py:741:print] dump_state ................... False\r\n[2021-03-27 17:02:52,848] [INFO] [config.py:741:print] dynamic_loss_scale_args ...... {'init_scale': 4294967296, 'scale_window': 1000, 'delayed_shift': 2, 'min_scale': 1}\r\n[2021-03-27 17:02:52,848] [INFO] [config.py:741:print] elasticity_enabled ........... False\r\n[2021-03-27 17:02:52,848] [INFO] [config.py:741:print] flops_profiler_config ........ {\r\n \"detailed\": true,\r\n \"enabled\": false,\r\n \"module_depth\": -1,\r\n \"profile_step\": 1,\r\n \"top_modules\": 3\r\n}\r\n[2021-03-27 17:02:52,848] [INFO] [config.py:741:print] fp16_enabled ................. True\r\n[2021-03-27 17:02:52,848] [INFO] [config.py:741:print] global_rank .................. 0\r\n[2021-03-27 17:02:52,848] [INFO] [config.py:741:print] gradient_accumulation_steps .. 1\r\n[2021-03-27 17:02:52,848] [INFO] [config.py:741:print] gradient_clipping ............ 1.0\r\n[2021-03-27 17:02:52,848] [INFO] [config.py:741:print] gradient_predivide_factor .... 1.0\r\n[2021-03-27 17:02:52,848] [INFO] [config.py:741:print] initial_dynamic_scale ........ 4294967296\r\n[2021-03-27 17:02:52,848] [INFO] [config.py:741:print] loss_scale ................... 0\r\n[2021-03-27 17:02:52,848] [INFO] [config.py:741:print] memory_breakdown ............. 
False\r\n[2021-03-27 17:02:52,848] [INFO] [config.py:741:print] optimizer_legacy_fusion ...... False\r\n[2021-03-27 17:02:52,848] [INFO] [config.py:741:print] optimizer_name ............... adam\r\n[2021-03-27 17:02:52,848] [INFO] [config.py:741:print] optimizer_params ............. {'lr': 5e-05, 'betas': [0.9, 0.999], 'eps': 1e-08, 'weight_decay': 0.0}\r\n[2021-03-27 17:02:52,848] [INFO] [config.py:741:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}\r\n[2021-03-27 17:02:52,848] [INFO] [config.py:741:print] pld_enabled .................. False\r\n[2021-03-27 17:02:52,848] [INFO] [config.py:741:print] pld_params ................... False\r\n[2021-03-27 17:02:52,848] [INFO] [config.py:741:print] prescale_gradients ........... False\r\n[2021-03-27 17:02:52,849] [INFO] [config.py:741:print] scheduler_name ............... WarmupLR\r\n[2021-03-27 17:02:52,849] [INFO] [config.py:741:print] scheduler_params ............. {'warmup_min_lr': 0, 'warmup_max_lr': 5e-05, 'warmup_num_steps': 0}\r\n[2021-03-27 17:02:52,849] [INFO] [config.py:741:print] sparse_attention ............. None\r\n[2021-03-27 17:02:52,849] [INFO] [config.py:741:print] sparse_gradients_enabled ..... False\r\n[2021-03-27 17:02:52,849] [INFO] [config.py:741:print] steps_per_print .............. 10\r\n[2021-03-27 17:02:52,849] [INFO] [config.py:741:print] tensorboard_enabled .......... False\r\n[2021-03-27 17:02:52,849] [INFO] [config.py:741:print] tensorboard_job_name ......... DeepSpeedJobName\r\n[2021-03-27 17:02:52,849] [INFO] [config.py:741:print] tensorboard_output_path ...... \r\n[2021-03-27 17:02:52,849] [INFO] [config.py:741:print] train_batch_size ............. 4\r\n[2021-03-27 17:02:52,849] [INFO] [config.py:741:print] train_micro_batch_size_per_gpu 4\r\n[2021-03-27 17:02:52,849] [INFO] [config.py:741:print] wall_clock_breakdown ......... False\r\n[2021-03-27 17:02:52,849] [INFO] [config.py:741:print] world_size ................... 1\r\n[2021-03-27 17:02:52,849] [INFO] [config.py:741:print] zero_allow_untested_optimizer False\r\n[2021-03-27 17:02:52,849] [INFO] [config.py:741:print] zero_config .................. {\r\n \"allgather_bucket_size\": 200000000.0,\r\n \"allgather_partitions\": true,\r\n \"contiguous_gradients\": true,\r\n \"cpu_offload\": true,\r\n \"cpu_offload_params\": false,\r\n \"cpu_offload_use_pin_memory\": false,\r\n \"elastic_checkpoint\": true,\r\n \"load_from_fp32_weights\": true,\r\n \"max_live_parameters\": 1000000000,\r\n \"max_reuse_distance\": 1000000000,\r\n \"overlap_comm\": true,\r\n \"param_persistence_threshold\": 100000,\r\n \"prefetch_bucket_size\": 50000000,\r\n \"reduce_bucket_size\": 200000000.0,\r\n \"reduce_scatter\": true,\r\n \"stage\": 2,\r\n \"sub_group_size\": 1000000000000\r\n}\r\n[2021-03-27 17:02:52,849] [INFO] [config.py:741:print] zero_enabled ................. True\r\n[2021-03-27 17:02:52,849] [INFO] [config.py:741:print] zero_optimization_stage ...... 
2\r\n[2021-03-27 17:02:52,850] [INFO] [config.py:748:print] json = {\r\n \"fp16\":{\r\n \"enabled\":true,\r\n \"hysteresis\":2,\r\n \"loss_scale\":0,\r\n \"loss_scale_window\":1000,\r\n \"min_loss_scale\":1\r\n },\r\n \"gradient_accumulation_steps\":1,\r\n \"gradient_clipping\":1.0,\r\n \"optimizer\":{\r\n \"params\":{\r\n \"betas\":[\r\n 0.9,\r\n 0.999\r\n ],\r\n \"eps\":1e-08,\r\n \"lr\":5e-05,\r\n \"weight_decay\":0.0\r\n },\r\n \"type\":\"Adam\"\r\n },\r\n \"scheduler\":{\r\n \"params\":{\r\n \"warmup_max_lr\":5e-05,\r\n \"warmup_min_lr\":0,\r\n \"warmup_num_steps\":0\r\n },\r\n \"type\":\"WarmupLR\"\r\n },\r\n \"train_micro_batch_size_per_gpu\":4,\r\n \"zero_optimization\":{\r\n \"allgather_bucket_size\":200000000.0,\r\n \"allgather_partitions\":true,\r\n \"contiguous_gradients\":true,\r\n \"cpu_offload\":true,\r\n \"overlap_comm\":true,\r\n \"reduce_bucket_size\":200000000.0,\r\n \"reduce_scatter\":true,\r\n \"stage\":2\r\n }\r\n}\r\nUsing /home/oriy/.cache/torch_extensions as PyTorch extensions root...\r\nNo modifications detected for re-loaded extension module utils, skipping build step...\r\nLoading extension module utils...\r\nTime to load utils op: 0.0005881786346435547 seconds\r\n***** Running training *****\r\n Num examples = 287113\r\n Num Epochs = 3\r\n Instantaneous batch size per device = 4\r\n Total train batch size (w. parallel, distributed & accumulation) = 4\r\n Gradient Accumulation steps = 1\r\n Total optimization steps = 215337\r\n 0%| | 0/215337 [00:00<?, ?it/s][2021-03-27 17:02:53,333] [INFO] [stage2.py:1391:step] [deepspeed] fp16 dynamic loss scale overflow! Rank 0 Skipping step. Attempted loss scale: 4294967296, reducing to 4294967296\r\n 0%| | 1/215337 [00:00<26:38:16, 2.25it/s][2021-03-27 17:02:53,687] [INFO] [stage2.py:1391:step] [deepspeed] fp16 dynamic loss scale overflow! Rank 0 Skipping step. Attempted loss scale: 4294967296, reducing to 2147483648.0\r\n",
"Next week I hope https://github.com/huggingface/transformers/pull/10753 will be finished, but for now here are the results on rtx-3090 24GB card with the unfinished zero-3 PR.\r\n\r\nAs you can see Deepspeed zero3's cpu offload is a way way more memory-efficient:\r\n\r\n```\r\n# baseline\r\n\r\nBS=4; CUDA_VISIBLE_DEVICES=0 PYTHONPATH=src USE_TF=0 python examples/seq2seq/run_translation.py --model_name_or_path t5-small --output_dir /tmp/zero3 --overwrite_output_dir --max_train_samples 64 --max_val_samples 64 --max_source_length 128 --max_target_length 128 --val_max_target_length 128 --do_train --num_train_epochs 1 --per_device_train_batch_size $BS --per_device_eval_batch_size $BS --learning_rate 3e-3 --warmup_steps 500 --predict_with_generate --logging_steps 0 --save_steps 0 --eval_steps 1 --group_by_length --adafactor --dataset_name wmt16 --dataset_config ro-en --source_lang en --target_lang ro --source_prefix \"translate English to Romanian: \" \r\n\r\n***** train metrics *****\r\n epoch = 1.0\r\n init_mem_cpu_alloc_delta = 3MB\r\n init_mem_cpu_peaked_delta = 0MB\r\n init_mem_gpu_alloc_delta = 230MB\r\n init_mem_gpu_peaked_delta = 0MB\r\n train_mem_cpu_alloc_delta = 60MB\r\n train_mem_cpu_peaked_delta = 0MB\r\n train_mem_gpu_alloc_delta = 231MB\r\n train_mem_gpu_peaked_delta = 226MB\r\n train_runtime = 3.619\r\n train_samples = 64\r\n train_samples_per_second = 4.421\r\n\r\n# zero2\r\n\r\n\r\nBS=4; PYTHONPATH=src USE_TF=0 deepspeed --num_gpus 1 examples/seq2seq/run_translation.py --model_name_or_path t5-small --output_dir /tmp/zero3 --overwrite_output_dir --max_train_samples 64 --max_val_samples 64 --max_source_length 128 --max_target_length 128 --val_max_target_length 128 --do_train --num_train_epochs 1 --per_device_train_batch_size $BS --per_device_eval_batch_size $BS --learning_rate 3e-3 --warmup_steps 500 --predict_with_generate --logging_steps 0 --save_steps 0 --eval_steps 1 --group_by_length --adafactor --dataset_name wmt16 --dataset_config ro-en --source_lang en --target_lang ro --source_prefix \"translate English to Romanian: \" --deepspeed examples/tests/deepspeed/ds_config_zero2.json\r\n\r\n\r\n***** train metrics *****\r\n epoch = 1.0\r\n init_mem_cpu_alloc_delta = 7MB\r\n init_mem_cpu_peaked_delta = 0MB\r\n init_mem_gpu_alloc_delta = 0MB\r\n init_mem_gpu_peaked_delta = 0MB\r\n train_mem_cpu_alloc_delta = 70MB\r\n train_mem_cpu_peaked_delta = 0MB\r\n train_mem_gpu_alloc_delta = 148MB\r\n train_mem_gpu_peaked_delta = 3559MB\r\n train_runtime = 5.0669\r\n train_samples = 64\r\n train_samples_per_second = 3.158\r\n\r\n\r\n# zero3\r\n\r\n\r\nBS=4; PYTHONPATH=src USE_TF=0 deepspeed --num_gpus 1 examples/seq2seq/run_translation.py --model_name_or_path t5-small --output_dir /tmp/zero3 --overwrite_output_dir --max_train_samples 64 --max_val_samples 64 --max_source_length 128 --max_target_length 128 --val_max_target_length 128 --do_train --num_train_epochs 1 --per_device_train_batch_size $BS --per_device_eval_batch_size $BS --learning_rate 3e-3 --warmup_steps 500 --predict_with_generate --logging_steps 0 --save_steps 0 --eval_steps 1 --group_by_length --adafactor --dataset_name wmt16 --dataset_config ro-en --source_lang en --target_lang ro --source_prefix \"translate English to Romanian: \" --deepspeed examples/tests/deepspeed/ds_config_zero3.json\r\n\r\n***** train metrics *****\r\n epoch = 1.0\r\n init_mem_cpu_alloc_delta = 7MB\r\n init_mem_cpu_peaked_delta = 0MB\r\n init_mem_gpu_alloc_delta = 0MB\r\n init_mem_gpu_peaked_delta = 0MB\r\n train_mem_cpu_alloc_delta = 71MB\r\n 
train_mem_cpu_peaked_delta = 0MB\r\n train_mem_gpu_alloc_delta = -52MB\r\n train_mem_gpu_peaked_delta = 244MB\r\n train_runtime = 7.6324\r\n train_samples = 64\r\n train_samples_per_second = 2.096\r\n```\r\n\r\nThe config files are from the PR I linked to in the first para.\r\n\r\nSo please give us a few more days - this is also depending on deepspeed merging several PRs and making a new release.\r\n",
"I suspect my cpu memory profiling functions are missing some allocations, which is odd. Surely, there must be more cpu memory used with cpu_offload. I will investigate this. \r\n\r\nSuspecting that `tracemalloc` doesn't tracks c++ allocations, which is what deepspeed does. might have to switch to sampling, but python threads's GIL is a big problem to get correct results.\r\n\r\n**edit:** this should fix it: https://github.com/huggingface/transformers/pull/10937",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,620 | 1,620 | NONE | null | ## Environment info
- `transformers` version: 4.5.0.dev0
- deepspeed version: 0.3.13
- Platform: Linux-4.15.0-66-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.8
- PyTorch version (GPU?): 1.8.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@stas00
## Information
I'm interested in training the large T5 models with deepspeed and huggingface. More specifically, I'm interested in fine-tuning a T5-11B model on one RTX-8000 48 GB GPU (similarly to https://huggingface.co/blog/zero-deepspeed-fairscale, https://github.com/huggingface/transformers/issues/9996).
However, when I try to use deepspeed the amount of memory on the GPU increases. For example, running the example seq2seq/run_summarization.py script with T5-Small and without deepspeed takes ~6GB, and running it with deepspeed takes ~8GB.
Model I am using: T5
The problem arises when using: The official examples/seq2seq/run_summarization.py script.
Without deepspeed:
python examples/seq2seq/run_summarization.py --model_name_or_path t5-small --do_train --do_eval --dataset_name cnn_dailymail --dataset_config "3.0.0" --source_prefix "summarize: " --output_dir /tmp/tst-summarization --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate
With deepspeed:
deepspeed examples/seq2seq/run_summarization.py --model_name_or_path t5-small --do_train --do_eval --dataset_name cnn_dailymail --dataset_config "3.0.0" --source_prefix "summarize: " --output_dir /tmp/tst-summarization --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --deepspeed examples/tests/deepspeed/ds_config.json
The tasks I am working on is:
Sequence to sequence generation.
## To reproduce
Steps to reproduce the behavior:
1. Clone transformers repo
2. Install requirements (including deepspeed: pip install deepspeed)
3. Run summarization example without deepspeed:
python examples/seq2seq/run_summarization.py --model_name_or_path t5-small --do_train --do_eval --dataset_name cnn_dailymail --dataset_config "3.0.0" --source_prefix "summarize: " --output_dir /tmp/tst-summarization --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate
4. Run summarization example with deepspeed:
deepspeed examples/seq2seq/run_summarization.py --model_name_or_path t5-small --do_train --do_eval --dataset_name cnn_dailymail --dataset_config "3.0.0" --source_prefix "summarize: " --output_dir /tmp/tst-summarization --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --deepspeed examples/tests/deepspeed/ds_config.json
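For reference, the kind of ZeRO-2 configuration passed via `--deepspeed` can be sketched as follows. This is only an illustration of common DeepSpeed options and is not necessarily identical to the `examples/tests/deepspeed/ds_config.json` shipped in the repo:

```python
import json

# Hedged sketch of a minimal ZeRO-2 config with CPU offload; the repo's config
# may enable additional options (optimizer, scheduler, gradient clipping, ...).
ds_config = {
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,
        "cpu_offload": True,
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```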
## Expected behavior
I would expect using deepspeed would reduce the amount of memory being used by the GPU. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10929/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10928 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10928/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10928/comments | https://api.github.com/repos/huggingface/transformers/issues/10928/events | https://github.com/huggingface/transformers/pull/10928 | 842,408,876 | MDExOlB1bGxSZXF1ZXN0NjAxOTk1MzI4 | 10,928 | Add example for registering callbacks with trainers | {
"login": "amalad",
"id": 12957603,
"node_id": "MDQ6VXNlcjEyOTU3NjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/12957603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amalad",
"html_url": "https://github.com/amalad",
"followers_url": "https://api.github.com/users/amalad/followers",
"following_url": "https://api.github.com/users/amalad/following{/other_user}",
"gists_url": "https://api.github.com/users/amalad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amalad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amalad/subscriptions",
"organizations_url": "https://api.github.com/users/amalad/orgs",
"repos_url": "https://api.github.com/users/amalad/repos",
"events_url": "https://api.github.com/users/amalad/events{/privacy}",
"received_events_url": "https://api.github.com/users/amalad/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for updating, this looks great!"
] | 1,616 | 1,617 | 1,617 | CONTRIBUTOR | null | # What does this PR do?
Fixes the issue addressed in #9036 by adding an example for registering a custom callback with the Trainer.
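For context, registering a custom callback with the `Trainer` looks roughly like the sketch below. It is a minimal illustration only (it assumes `model`, `training_args` and `train_dataset` are defined as in a standard training script), and the exact snippet added to the docs may differ:

```python
from transformers import Trainer, TrainerCallback

class PrinterCallback(TrainerCallback):
    """Toy callback that prints the logs on the main process at each logging step."""

    def on_log(self, args, state, control, logs=None, **kwargs):
        if state.is_local_process_zero:
            print(logs)

# Either pass the callback when building the Trainer...
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    callbacks=[PrinterCallback()],
)

# ...or register it on an existing Trainer instance.
trainer.add_callback(PrinterCallback())
```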
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
Fixes: https://github.com/huggingface/transformers/issues/9036
## Who can review?
Anyone in the community is free to review the PR. But @sgugger seems the most appropriate. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10928/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10928/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10928",
"html_url": "https://github.com/huggingface/transformers/pull/10928",
"diff_url": "https://github.com/huggingface/transformers/pull/10928.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10928.patch",
"merged_at": 1617640043000
} |
https://api.github.com/repos/huggingface/transformers/issues/10927 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10927/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10927/comments | https://api.github.com/repos/huggingface/transformers/issues/10927/events | https://github.com/huggingface/transformers/issues/10927 | 842,273,358 | MDU6SXNzdWU4NDIyNzMzNTg= | 10,927 | Add Pooler to DistilBERT | {
"login": "peterskipper",
"id": 4040229,
"node_id": "MDQ6VXNlcjQwNDAyMjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4040229?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peterskipper",
"html_url": "https://github.com/peterskipper",
"followers_url": "https://api.github.com/users/peterskipper/followers",
"following_url": "https://api.github.com/users/peterskipper/following{/other_user}",
"gists_url": "https://api.github.com/users/peterskipper/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peterskipper/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peterskipper/subscriptions",
"organizations_url": "https://api.github.com/users/peterskipper/orgs",
"repos_url": "https://api.github.com/users/peterskipper/repos",
"events_url": "https://api.github.com/users/peterskipper/events{/privacy}",
"received_events_url": "https://api.github.com/users/peterskipper/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,620 | 1,620 | NONE | null | # 🚀 Feature request
Hi, I'd like to add a Pooler class to the DistilBERT model, whose interface is similar to BertPooler [here](https://github.com/huggingface/transformers/blob/7da995c00c025c4180c7fb0357256b7f83d342ef/src/transformers/models/bert/modeling_bert.py#L610)
## Motivation
I was using your DistilBERT model and discovered that I needed a pooler, so I wrote my own class. Thought I would add it to the repo in case others would like it.
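For illustration, a pooler with the same interface as `BertPooler`, adapted to DistilBERT's `config.dim`, could be sketched as below (the class name and details are placeholders, not necessarily what would land in the library):

```python
import torch.nn as nn

class DistilBertPooler(nn.Module):
    """Sketch of a BertPooler-style module for DistilBERT (illustrative only)."""

    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.dim, config.dim)
        self.activation = nn.Tanh()

    def forward(self, hidden_states):
        # Pool by taking the hidden state of the first token ([CLS]).
        first_token_tensor = hidden_states[:, 0]
        pooled_output = self.dense(first_token_tensor)
        return self.activation(pooled_output)
```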
## Your contribution
If this is something you're interested in, I can submit a PR
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10927/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10926 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10926/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10926/comments | https://api.github.com/repos/huggingface/transformers/issues/10926/events | https://github.com/huggingface/transformers/issues/10926 | 842,256,586 | MDU6SXNzdWU4NDIyNTY1ODY= | 10,926 | Typo in examples/text-classification README | {
"login": "lukemelas",
"id": 13307440,
"node_id": "MDQ6VXNlcjEzMzA3NDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13307440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukemelas",
"html_url": "https://github.com/lukemelas",
"followers_url": "https://api.github.com/users/lukemelas/followers",
"following_url": "https://api.github.com/users/lukemelas/following{/other_user}",
"gists_url": "https://api.github.com/users/lukemelas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lukemelas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lukemelas/subscriptions",
"organizations_url": "https://api.github.com/users/lukemelas/orgs",
"repos_url": "https://api.github.com/users/lukemelas/repos",
"events_url": "https://api.github.com/users/lukemelas/events{/privacy}",
"received_events_url": "https://api.github.com/users/lukemelas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, do you want to open a PR with the fix since you found it?\r\n\r\nPS: I didn't know that \\`\\`\\`diff feature, it's soooo pretty 🤩 !",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,620 | 1,620 | NONE | null | In the examples/text-classification README, the example scripts for "PyTorch version, no Trainer" are slightly incorrect. They should be adjusted as:
```diff
export TASK_NAME=mrpc
python run_glue_no_trainer.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
- --max_seq_length 128 \
+ --max_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/
```
Thanks for your great repo!
Luke | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10926/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10925 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10925/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10925/comments | https://api.github.com/repos/huggingface/transformers/issues/10925/events | https://github.com/huggingface/transformers/pull/10925 | 842,182,893 | MDExOlB1bGxSZXF1ZXN0NjAxODA3MzY2 | 10,925 | Sagemaker test | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> LGTM! Should the images be added before merge?\r\n\r\nAre you referring to container images? they will be added after the release of `transformers`. If you are referring to the `TODO: Add a screenshot of PR + Text template to make it easy to open.` nope I would add them as soon as we went through the process. To make screenshots while doing it. "
] | 1,616 | 1,617 | 1,617 | MEMBER | null | # What does this PR do?
This PR creates tests for `SageMaker`, covering the `PyTorch` and `TensorFlow` DLCs. I added a `README.md` which explains when the tests need to be run and how. Currently, not all tests leverage our `examples/` due to limitations like `SageMakerTrainer` not being integrated into `Trainer` and the missing Keras implementation for the SageMaker-specific data/model parallelism libraries. In the near future, all scripts in `tests/sagemaker/scripts` will be removed and the tests will copy the scripts from `examples/` before executing them.
## Current Tests
| ID | description | platform | # GPUs | collected & evaluated metrics |
|-------------------------------------|-------------------------------------------------------------------|-----------------------------|--------|------------------------------------------|
| pytorch-transfromers-test-single | test bert finetuning using BERT from transformer lib + PT | SageMaker createTrainingJob | 1 | train_runtime, eval_accuracy & eval_loss |
| pytorch-transfromers-test-2-ddp | test bert finetuning using BERT from transformer lib + PT DDP | SageMaker createTrainingJob | 16 | train_runtime, eval_accuracy & eval_loss |
| pytorch-transfromers-test-2-smd | test bert finetuning using BERT from transformer lib + PT SM DDP | SageMaker createTrainingJob | 16 | train_runtime, eval_accuracy & eval_loss |
| pytorch-transfromers-test-1-smp | test roberta finetuning using BERT from transformer lib + PT SM MP | SageMaker createTrainingJob | 8 | train_runtime, eval_accuracy & eval_loss |
| tensorflow-transfromers-test-single | test bert finetuning using BERT from transformer lib + TF | SageMaker createTrainingJob | 1 | train_runtime, eval_accuracy & eval_loss |
| tensorflow-transfromers-test-2-smd | test bert finetuning using BERT from transformer lib+ TF SM DDP | SageMaker createTrainingJob | 16 | train_runtime, eval_accuracy & eval_loss | | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10925/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10925",
"html_url": "https://github.com/huggingface/transformers/pull/10925",
"diff_url": "https://github.com/huggingface/transformers/pull/10925.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10925.patch",
"merged_at": 1617085682000
} |
https://api.github.com/repos/huggingface/transformers/issues/10924 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10924/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10924/comments | https://api.github.com/repos/huggingface/transformers/issues/10924/events | https://github.com/huggingface/transformers/issues/10924 | 842,178,957 | MDU6SXNzdWU4NDIxNzg5NTc= | 10,924 | Models not able to run when packed with PyInstaller | {
"login": "giuliodz",
"id": 44986582,
"node_id": "MDQ6VXNlcjQ0OTg2NTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/44986582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/giuliodz",
"html_url": "https://github.com/giuliodz",
"followers_url": "https://api.github.com/users/giuliodz/followers",
"following_url": "https://api.github.com/users/giuliodz/following{/other_user}",
"gists_url": "https://api.github.com/users/giuliodz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/giuliodz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/giuliodz/subscriptions",
"organizations_url": "https://api.github.com/users/giuliodz/orgs",
"repos_url": "https://api.github.com/users/giuliodz/repos",
"events_url": "https://api.github.com/users/giuliodz/events{/privacy}",
"received_events_url": "https://api.github.com/users/giuliodz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Do you get the same error when not installing PyInstaller, instead using `haystack` in a virtual environment?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Any update on this one?"
] | 1,616 | 1,685 | 1,620 | NONE | null | ## Environment info
- `transformers` version: 4.1.1
- Platform: Linux-5.8.0-48-generic-x86_64-with-glibc2.27
- Python version: 3.8.8
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
-
## Information
I am trying to create an executable for a flask application that uses [haystack](https://github.com/deepset-ai/haystack/) to serve a QA System. Haystack uses transformers.
If I run my API normally with `python api.py`, it works fine.
When I run `pyinstaller main.spec --distpath distAPI` the executable gets created fine (note that I will post main.spec down in the **To reproduce** section). However, when I run it with `./distAPI/main/main` I get the following error:
```
03/26/2021 16:45:35 - INFO - faiss - Loading faiss with AVX2 support.
03/26/2021 16:45:35 - INFO - faiss - Loading faiss.
Traceback (most recent call last):
File "torch/_utils_internal.py", line 49, in get_source_lines_and_file
File "inspect.py", line 979, in getsourcelines
File "inspect.py", line 798, in findsource
OSError: could not get source code
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "main.py", line 8, in <module>
from haystack.preprocessor.cleaning import clean_wiki_text
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "PyInstaller/loader/pyimod03_importers.py", line 531, in exec_module
File "haystack/__init__.py", line 5, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "PyInstaller/loader/pyimod03_importers.py", line 531, in exec_module
File "haystack/finder.py", line 9, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "PyInstaller/loader/pyimod03_importers.py", line 531, in exec_module
File "haystack/retriever/base.py", line 9, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "PyInstaller/loader/pyimod03_importers.py", line 531, in exec_module
File "haystack/document_store/base.py", line 6, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "PyInstaller/loader/pyimod03_importers.py", line 531, in exec_module
File "haystack/preprocessor/utils.py", line 11, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "PyInstaller/loader/pyimod03_importers.py", line 531, in exec_module
File "farm/data_handler/utils.py", line 18, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "PyInstaller/loader/pyimod03_importers.py", line 531, in exec_module
File "farm/file_utils.py", line 26, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "PyInstaller/loader/pyimod03_importers.py", line 531, in exec_module
File "transformers/__init__.py", line 91, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "PyInstaller/loader/pyimod03_importers.py", line 531, in exec_module
File "transformers/modelcard.py", line 31, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "PyInstaller/loader/pyimod03_importers.py", line 531, in exec_module
File "transformers/models/auto/__init__.py", line 20, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "PyInstaller/loader/pyimod03_importers.py", line 531, in exec_module
File "transformers/models/auto/configuration_auto.py", line 28, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "PyInstaller/loader/pyimod03_importers.py", line 531, in exec_module
File "transformers/models/deberta/__init__.py", line 25, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "PyInstaller/loader/pyimod03_importers.py", line 531, in exec_module
File "transformers/models/deberta/modeling_deberta.py", line 462, in <module>
File "torch/jit/_script.py", line 936, in script
File "torch/jit/frontend.py", line 197, in get_jit_def
File "torch/_utils_internal.py", line 56, in get_source_lines_and_file
OSError: Can't get source for <function c2p_dynamic_expand at 0x7f45cc4d85e0>. TorchScript requires source access in order to carry out compilation, make sure original .py files are available.
[104072] Failed to execute script main
```
It seems that it cannot get the source code for the function `c2p_dynamic_expand` in `transformers/models/deberta/modeling_deberta.py`.
## Additional information
This problem has come up before when using PyInstaller with Torch.
See this issue [here](https://github.com/pyinstaller/pyinstaller/issues/4926) for example.
## To reproduce
Steps to reproduce the behavior:
1. Make a `main.py`:
```
from haystack.preprocessor.cleaning import clean_wiki_text
if __name__ == '__main__':
    print('Hello World')
```
2. Install haystack with `pip install haystack`
3. Check that it runs with `python main.py` (it should)
4. Install pyinstaller with `pip install pyinstaller`
5. Create a hooks/ folder containing the following files:
**hook-justext.py hook-packaging.py hook-requests.py hook-tokenizers.py hook-tqdm.py hook-transformers.py hook-filelock.py hook-numpy.py hook-regex.py hook-sacremoses.py hook-torch.py**
Each of these files should provide a hook telling PyInstaller to collect that module. So, for example, the hook-numpy.py file should be:
```
from PyInstaller.utils.hooks import collect_all
datas, binaries, hiddenimports = collect_all('numpy')
```
And similarly for the rest of the hook files listed above.
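For instance, the transformers hook can mirror the numpy one shown above (illustrative; the remaining hook files follow the same pattern):

```python
# hooks/hook-transformers.py
from PyInstaller.utils.hooks import collect_all

datas, binaries, hiddenimports = collect_all('transformers')
```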
6. Create a `main.spec` file:
```
# -*- mode: python ; coding: utf-8 -*-
block_cipher = None
a = Analysis(['main.py'],
pathex=['/Wexond/QandA/api'],
binaries=[],
datas=[],
hiddenimports=['justext'],
hookspath=['./hooks/'], ## <-------------- Specifying the hooks
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False)
pyz = PYZ(a.pure, a.zipped_data,
cipher=block_cipher)
exe = EXE(pyz,
a.scripts,
[],
exclude_binaries=True,
name='main',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
console=True )
coll = COLLECT(exe,
a.binaries,
a.zipfiles,
a.datas,
strip=False,
upx=True,
upx_exclude=[],
name='main')
```
7. Pack the app with `pyinstaller main.spec --distpath distAPI`
8. Try to run it with `./distAPI/main/main`.
You should now get the `OSError` mentioned above.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10924/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10923 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10923/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10923/comments | https://api.github.com/repos/huggingface/transformers/issues/10923/events | https://github.com/huggingface/transformers/issues/10923 | 842,167,096 | MDU6SXNzdWU4NDIxNjcwOTY= | 10,923 | /pytorch/xla/torch_xla/csrc/helpers.h:100 : Check failed: scalar_value.isIntegral() | {
"login": "mabdullah1994",
"id": 18423941,
"node_id": "MDQ6VXNlcjE4NDIzOTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/18423941?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mabdullah1994",
"html_url": "https://github.com/mabdullah1994",
"followers_url": "https://api.github.com/users/mabdullah1994/followers",
"following_url": "https://api.github.com/users/mabdullah1994/following{/other_user}",
"gists_url": "https://api.github.com/users/mabdullah1994/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mabdullah1994/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mabdullah1994/subscriptions",
"organizations_url": "https://api.github.com/users/mabdullah1994/orgs",
"repos_url": "https://api.github.com/users/mabdullah1994/repos",
"events_url": "https://api.github.com/users/mabdullah1994/events{/privacy}",
"received_events_url": "https://api.github.com/users/mabdullah1994/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't think Longformer is supported on TPU, @patrickvonplaten will confirm.",
"@sgugger Thanks!\r\nLooking forward to @patrickvonplaten confirmation. ",
"Hey @mabdullah1994, yeah `Longformer` is sadly not yet supported on TPU. We just merged Big Bird: https://huggingface.co/transformers/master/model_doc/bigbird.html though, which should work on TPU. It would be amazing if you could try it out :-)",
"@patrickvonplaten Thanks for the update Patrick!\r\nJust a quick query: I have a dataset with large sequences and I don't want to truncate the text. What options do I have? Will XLNet be able to handle large sequences with pre-trained models? Could you point me towards an example of using stride for this use case? Thanks!",
"Well, tried `BigBird` and getting a similar error on Google Colab\r\n\r\n```\r\nRuntimeError: torch_xla/csrc/tensor_methods.cpp:880 : Check failed: xla::ShapeUtil::Compatible(shapes.back(), tensor_shape) \r\n*** Begin stack trace ***\r\n\ttensorflow::CurrentStackTrace()\r\n\ttorch_xla::XLATensor::cat(absl::lts_2020_02_25::Span<torch_xla::XLATensor const>, long)\r\n\ttorch_xla::AtenXlaType::cat(c10::ArrayRef<at::Tensor>, long)\r\n\tc10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(c10::ArrayRef<at::Tensor>, long), at::Tensor, c10::guts::typelist::typelist<c10::ArrayRef<at::Tensor>, long> >, at::Tensor (c10::ArrayRef<at::Tensor>, long)>::call(c10::OperatorKernel*, c10::ArrayRef<at::Tensor>, long)\r\n\t\r\n\tat::cat(c10::ArrayRef<at::Tensor>, long)\r\n\t\r\n\t\r\n\t\r\n\tat::cat(c10::ArrayRef<at::Tensor>, long)\r\n\t\r\n\t_PyMethodDef_RawFastCallKeywords\r\n\t_PyCFunction_FastCallKeywords\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyFunction_FastCallKeywords\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyObject_Call_Prepend\r\n\tPyObject_Call\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyObject_Call_Prepend\r\n\t\r\n\t_PyObject_FastCallKeywords\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyObject_Call_Prepend\r\n\tPyObject_Call\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyObject_Call_Prepend\r\n\t\r\n\t_PyObject_FastCallKeywords\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyObject_Call_Prepend\r\n\tPyObject_Call\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyObject_Call_Prepend\r\n\t\r\n\t_PyObject_FastCallKeywords\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyObject_Call_Prepend\r\n\tPyObject_Call\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyObject_Call_Prepend\r\n\t\r\n\t_PyObject_FastCallKeywords\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyObject_Call_Prepend\r\n\tPyObject_Call\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyObject_Call_Prepend\r\n\t\r\n\t_PyObject_FastCallKeywords\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyObject_Call_Prepend\r\n\tPyObject_Call\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyObject_Call_Prepend\r\n\t\r\n\tPyObject_Call\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyFunction_FastCallKeywords\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyFunction_FastCallKeywords\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyFunction_FastCallKeywords\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyFunction_FastCallDict\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyFunction_FastCallKeywords\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyFunction_FastCallKeywords\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\tPyEval_EvalCode\r\n\t\r\n\t_PyMethodDef_RawFastCallKeywords\r\n\t_PyCFunction_FastCallKeywords\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyFunction_FastCallKeywords\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyFunction_FastCallKeywords\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyObject_Call_Prepend\r\n\tPyObject_Call\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyFunction_FastCallKeywords\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyFunction_FastCallKeywords\r\n\t_P
yEval_EvalFrameDefault\r\n\t_PyFunction_FastCallKeywords\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyFunction_FastCallKeywords\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyFunction_FastCallDict\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyFunction_FastCallDict\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyFunction_FastCallKeywords\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyFunction_FastCallKeywords\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyObject_Call_Prepend\r\n\tPyObject_Call\r\n\t_PyEval_EvalFrameDefault\r\n\t_PyEval_EvalCodeWithName\r\n\t_PyFunction_FastCallKeywords\r\n*** End stack trace ***\r\ns64[1,1,1]{2,1,0} vs. f32[1,1,1]{2,1,0}\r\n```",
"Hey @mabdullah1994, \r\n\r\nCould you maybe open a new issue showcasing that big bird doesn't work on PyTorch/XLA? :-)",
"Hey @patrickvonplaten \r\n\r\nJust created a new issue #11363 with the details of the BigBird issue. Please advice. Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Any updates on this?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"hey @patrickvonplaten, with the release of the new trainer should this issue be resolved. I'm using the latest version of transformers and still getting this for models like [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) running on TPU."
] | 1,616 | 1,656 | 1,624 | NONE | null | ## Environment info
- `transformers` version: 4.4.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.0+cu101 (False)
- Tensorflow version (GPU?):
- Using GPU in script?: TPU
- Using distributed or parallel set-up in script?:
### Who can help
@patrickvonplaten @sgugger
## Information
I am using LongformerForSequenceClassification and LongformerTokenizerFast for a simple text classification problem on Google Colab TPU:
The problem arises when using:
* [ ] my own modified scripts: (Script shared) If I replace the LongformerForSequenceClassification model with the DistilBertForSequenceClassification model, the same code works perfectly fine and the training starts without any issues. However, with LongformerForSequenceClassification, I start getting weird errors on TPU.
```
from pathlib import Path
def read_imdb_split(split_dir):
split_dir = Path(split_dir)
texts = []
labels = []
for label_dir in ["pos", "neg"]:
for text_file in (split_dir/label_dir).iterdir():
texts.append(text_file.read_text())
            labels.append(0 if label_dir == "neg" else 1)
return texts, labels
train_texts, train_labels = read_imdb_split('aclImdb/train')
test_texts, test_labels = read_imdb_split('aclImdb/test')
from sklearn.model_selection import train_test_split
train_texts, val_texts, train_labels, val_labels = train_test_split(train_texts, train_labels, test_size=.2)
from transformers import DistilBertTokenizerFast, LongformerTokenizerFast
# tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
tokenizer = LongformerTokenizerFast.from_pretrained('allenai/longformer-base-4096', max_length = 8)
train_encodings = tokenizer(train_texts, truncation=True, padding=True)
val_encodings = tokenizer(val_texts, truncation=True, padding=True)
test_encodings = tokenizer(test_texts, truncation=True, padding=True)
import torch
class IMDbDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
item['labels'] = torch.tensor(self.labels[idx])
return item
def __len__(self):
return len(self.labels)
train_dataset = IMDbDataset(train_encodings, train_labels)
val_dataset = IMDbDataset(val_encodings, val_labels)
test_dataset = IMDbDataset(test_encodings, test_labels)
from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments, LongformerForSequenceClassification
import torch_xla.distributed.xla_multiprocessing as xmp
import torch_xla.core.xla_model as xm
def _mp_fn(index):
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=3, # total number of training epochs
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=64, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=10,
)
# model = DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')
model = LongformerForSequenceClassification.from_pretrained("allenai/longformer-base-4096", attention_window = 2)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=val_dataset # evaluation dataset
)
trainer.train()
xmp.spawn(_mp_fn, args=(), nprocs=1, start_method='fork')
```
The tasks I am working on is:
* [ ] my own task or dataset: Using the IMDB Dataset for Text Classification
## To reproduce
Steps to reproduce the behavior:
1. Setup TPU-client on google Colab: !pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8-cp37-cp37m-linux_x86_64.whl
2. Download the dataset:
a. !wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
b. !tar -xf aclImdb_v1.tar.gz
3. Execute the given script
```
RuntimeError: /pytorch/xla/torch_xla/csrc/helpers.h:100 : Check failed: scalar_value.isIntegral()
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
torch_xla::XlaHelpers::ScalarValue(c10::Scalar, xla::PrimitiveType, xla::XlaBuilder*)
torch_xla::ir::ops::InferOutputShape(absl::lts_2020_02_25::Span<xla::Shape const>, std::function<xla::XlaOp (absl::lts_2020_02_25::Span<xla::XlaOp const>)> const&)
torch_xla::ir::Node::GetOpShape(std::function<xla::Shape ()> const&) const
torch_xla::ir::Node::Node(torch_xla::ir::OpKind, absl::lts_2020_02_25::Span<torch_xla::ir::Value const>, std::function<xla::Shape ()> const&, unsigned long, absl::lts_2020_02_25::uint128)
torch_xla::ir::ops::ConstantPadNd::ConstantPadNd(torch_xla::ir::Value const&, std::vector<long, std::allocator<long> >, c10::Scalar)
void __gnu_cxx::new_allocator<torch_xla::ir::ops::ConstantPadNd>::construct<torch_xla::ir::ops::ConstantPadNd, torch_xla::ir::Value, std::vector<long, std::allocator<long> >&, c10::Scalar&>(torch_xla::ir::ops::ConstantPadNd*, torch_xla::ir::Value&&, std::vector<long, std::allocator<long> >&, c10::Scalar&)
torch_xla::XLATensor::constant_pad_nd(torch_xla::XLATensor const&, absl::lts_2020_02_25::Span<long const>, c10::Scalar)
torch_xla::AtenXlaType::constant_pad_nd(at::Tensor const&, c10::ArrayRef<long>, c10::Scalar)
c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&, c10::ArrayRef<long>, c10::Scalar), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, c10::ArrayRef<long>, c10::Scalar> >, at::Tensor (at::Tensor const&, c10::ArrayRef<long>, c10::Scalar)>::call(c10::OperatorKernel*, at::Tensor const&, c10::ArrayRef<long>, c10::Scalar)
at::constant_pad_nd(at::Tensor const&, c10::ArrayRef<long>, c10::Scalar)
at::constant_pad_nd(at::Tensor const&, c10::ArrayRef<long>, c10::Scalar)
_PyMethodDef_RawFastCallKeywords
_PyCFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyObject_Call_Prepend
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyObject_Call_Prepend
_PyObject_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyObject_Call_Prepend
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyObject_Call_Prepend
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
PyEval_EvalCode
_PyMethodDef_RawFastCallKeywords
_PyCFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyObject_Call_Prepend
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallDict
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallDict
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallDict
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyObject_FastCallDict
_PyObject_FastCallKeywords
_PyEval_EvalFrameDefault
_PyObject_Call_Prepend
_PyObject_FastCallKeywords
_PyMethodDef_RawFastCallDict
PyCFunction_Call
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
PyEval_EvalCode
*** End stack trace ***
Scalar type not supported
```
## Expected behavior
Model training should have started, but instead the error above was raised.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10923/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10922 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10922/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10922/comments | https://api.github.com/repos/huggingface/transformers/issues/10922/events | https://github.com/huggingface/transformers/issues/10922 | 841,894,936 | MDU6SXNzdWU4NDE4OTQ5MzY= | 10,922 | Use reformer in down stream task meet problem | {
"login": "LeopoldACC",
"id": 44536699,
"node_id": "MDQ6VXNlcjQ0NTM2Njk5",
"avatar_url": "https://avatars.githubusercontent.com/u/44536699?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeopoldACC",
"html_url": "https://github.com/LeopoldACC",
"followers_url": "https://api.github.com/users/LeopoldACC/followers",
"following_url": "https://api.github.com/users/LeopoldACC/following{/other_user}",
"gists_url": "https://api.github.com/users/LeopoldACC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeopoldACC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeopoldACC/subscriptions",
"organizations_url": "https://api.github.com/users/LeopoldACC/orgs",
"repos_url": "https://api.github.com/users/LeopoldACC/repos",
"events_url": "https://api.github.com/users/LeopoldACC/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeopoldACC/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"If I train the model on the crime-and-punishment\r\n```shell\r\npython examples/language-modeling/run_clm.py --model_name_or_path google/reformer-crime-and-punishment --dataset_name crime_and_punish --do_train --do_eval --output_dir /home2/zhenggo1/checkpoint/reformer_clm\r\n```\r\nthe bug is below\r\n```python\r\nAssertionError: If training, make sure that config.axial_pos_shape factors: (512, 1024) multiply to sequence length. Got prod((512, 1024)) != sequence_length: 1024. You might want to consider padding your sequence length to 524288 or changing config.axial_pos_shape.\r\n```",
"Hi,\r\n\r\nI have been playing around with Reformer these few days so I hope I can give some insights. Axial positional encoding in Reformer requires that sequence length must be fixed to the product of `axial_pos_embds_dim`. See the documentation here\r\n\r\nhttps://huggingface.co/transformers/model_doc/reformer.html#axial-positional-encodings\r\n\r\nSo you have to either pad the sequence length to that fixed size, or change the value for `axial_pos_embds_dim` to a smaller value. Due to this reason, I believe example scripts won't work with Reformer out of the box.\r\n\r\nThe Reformer examples from Google's Trax actually don't use axial positional encoding, just normal positional encoding (see [here](https://github.com/google/trax/blob/master/trax/examples/NER_using_Reformer.ipynb)). So I actually disable axial positional encoding (passing `axial_pos_embds=False` to Reformer config) and it works fine. By disabling this, I can also use dynamic padding (pad to max length within a batch) and saves even more memory.\r\n\r\nI haven't tested the accuracy difference between with and without axial positional encoding. But axial positional encoding is so slow for a dataset with varying sequence lengths that I find it impractical.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,620 | 1,620 | NONE | null | ## Environment info
- `transformers` version: 4.2.2
- Platform: CentOS
- Python version: 3.7
- PyTorch version (GPU?): 1.5.1 cpu only
- Tensorflow version (GPU?):
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
## Information
Model I am using (Bert, XLNet ...): Reformer
The problem arises when using:
* [x] the official example scripts:
### sequence classification task under glue
- bug
```
Traceback (most recent call last):
File "examples/text-classification/run_glue.py", line 584, in <module>
main()
File "examples/text-classification/run_glue.py", line 410, in main
datasets = datasets.map(preprocess_function, batched=True, load_from_cache_file=not data_args.overwrite_cache)
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/dataset_dict.py", line 386, in map
for k, dataset in self.items()
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/dataset_dict.py", line 386, in <dictcomp>
for k, dataset in self.items()
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1120, in map
update_data = does_function_return_dict(test_inputs, test_indices)
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1091, in does_function_return_dict
function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "examples/text-classification/run_glue.py", line 403, in preprocess_function
result = tokenizer(*args, padding=padding, max_length=data_args.max_seq_length, truncation=True)
File "/home2/zhenggo1/LowPrecisionInferenceTool/examples/pytorch/huggingface_transformers/src/transformers/tokenization_utils_base.py", line 2335, in __call__
**kwargs,
File "/home2/zhenggo1/LowPrecisionInferenceTool/examples/pytorch/huggingface_transformers/src/transformers/tokenization_utils_base.py", line 2500, in batch_encode_plus
**kwargs,
File "/home2/zhenggo1/LowPrecisionInferenceTool/examples/pytorch/huggingface_transformers/src/transformers/tokenization_utils_base.py", line 2217, in _get_padding_truncation_strategies
"Asking to pad but the tokenizer does not have a padding token. "
ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.
```
- shell code
```bash
python examples/text-classification/run_glue.py --model_type reformer --model_name_or_path google/reformer-crime-and-punishment --task_name $TASK_NAME --do_train --do_eval --max_seq_length 512 --per_gpu_eval_batch_size=32 --per_gpu_train_batch_size=32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /home2/zhenggo1/checkpoint/reformer_mrpc
```
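One possible workaround for the padding error above (a sketch added for illustration, not part of the original report; it assumes the Reformer checkpoint's tokenizer simply has no pad token registered and that it defines an eos token) is to set a padding token before tokenizing:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/reformer-crime-and-punishment")

# Option 1: reuse the end-of-sequence token as the padding token.
tokenizer.pad_token = tokenizer.eos_token

# Option 2: register a dedicated [PAD] token instead; the model's embeddings
# would then need model.resize_token_embeddings(len(tokenizer)).
# tokenizer.add_special_tokens({"pad_token": "[PAD]"})

batch = tokenizer(
    ["a short example", "another, slightly longer example"],
    padding=True,
    truncation=True,
    max_length=512,
)
print(batch["input_ids"])
```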
### translation task under wmt_en_ro
- bug
```
Traceback (most recent call last):
File "examples/seq2seq/finetune_trainer.py", line 451, in <module>
main()
File "examples/seq2seq/finetune_trainer.py", line 215, in main
cache_dir=model_args.cache_dir,
File "/home2/zhenggo1/LowPrecisionInferenceTool/examples/pytorch/huggingface_transformers/src/transformers/models/auto/modeling_auto.py", line 1226, in from_pretrained
", ".join(c.__name__ for c in MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING.keys()),
ValueError: Unrecognized configuration class <class 'transformers.models.reformer.configuration_reformer.ReformerConfig'> for this kind of AutoModel: AutoModelForSeq2SeqLM.
Model type should be one of LEDConfig, BlenderbotSmallConfig, MT5Config, T5Config, PegasusConfig, MarianConfig, MBartConfig, BlenderbotConfig, BartConfig, FSMTConfig, EncoderDecoderConfig, XLMProphetNetConfig, ProphetNetConfig.
```
- shell code
```bash
python examples/seq2seq/finetune_trainer.py --model_name_or_path google/reformer-crime-and-punishment --do_train --do_eval --task translation_en_to_ro --data_dir examples/seq2seq/test_data/wmt_en_ro/ --output_dir /home2/zhenggo1/checkpoint/reformer_translation --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate
```
### clm task under wikitext
- bug
```
Traceback (most recent call last):
File "examples/language-modeling/run_clm.py", line 472, in <module>
main()
File "examples/language-modeling/run_clm.py", line 365, in main
train_result = trainer.train(model_path=model_path)
File "/home2/zhenggo1/LowPrecisionInferenceTool/examples/pytorch/huggingface_transformers/src/transformers/trainer.py", line 888, in train
tr_loss += self.training_step(model, inputs)
File "/home2/zhenggo1/LowPrecisionInferenceTool/examples/pytorch/huggingface_transformers/src/transformers/trainer.py", line 1250, in training_step
loss = self.compute_loss(model, inputs)
File "/home2/zhenggo1/LowPrecisionInferenceTool/examples/pytorch/huggingface_transformers/src/transformers/trainer.py", line 1277, in compute_loss
outputs = model(**inputs)
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home2/zhenggo1/LowPrecisionInferenceTool/examples/pytorch/huggingface_transformers/src/transformers/models/reformer/modeling_reformer.py", line 2244, in forward
return_dict=return_dict,
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home2/zhenggo1/LowPrecisionInferenceTool/examples/pytorch/huggingface_transformers/src/transformers/models/reformer/modeling_reformer.py", line 2090, in forward
start_idx_pos_encodings=start_idx_pos_encodings,
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home2/zhenggo1/LowPrecisionInferenceTool/examples/pytorch/huggingface_transformers/src/transformers/models/reformer/modeling_reformer.py", line 264, in forward
position_embeddings = self.position_embeddings(position_ids)
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home2/zhenggo1/LowPrecisionInferenceTool/examples/pytorch/huggingface_transformers/src/transformers/models/reformer/modeling_reformer.py", line 158, in forward
self.axial_pos_shape, self.axial_pos_shape, sequence_length, reduce(mul, self.axial_pos_shape)
AssertionError: If training, make sure that config.axial_pos_shape factors: (512, 1024) multiply to sequence length. Got prod((512, 1024)) != sequence_length: 1024. You might want to consider padding your sequence length to 524288 or changing config.axial_pos_shape.
```
- shell code
```bash
python examples/language-modeling/run_clm.py --model_name_or_path google/reformer-crime-and-punishment --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /home2/zhenggo1/checkpoint/reformer_clm
```
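For the axial position embedding assertion above, the error message itself offers two ways out: pad the sequence length up to the product of `config.axial_pos_shape`, or change `config.axial_pos_shape`. A rough sketch of the second option (illustrative values only; the axial position embeddings are then trained from scratch rather than reused):
```python
from transformers import ReformerConfig, ReformerModelWithLMHead

config = ReformerConfig.from_pretrained("google/reformer-crime-and-punishment")
# With a training sequence length of 1024, the two factors must multiply to 1024.
config.axial_pos_shape = (32, 32)

# Initializing from the adjusted config avoids the mismatch with the pretrained
# (512, 1024) axial embeddings, at the cost of not reusing the pretrained weights.
model = ReformerModelWithLMHead(config)
print(model.config.axial_pos_shape)
```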
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. run shell code as shown as above(translation dataset may not use the local)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I just need a task I can run evaluation on and compute metrics for.
Thanks a lot if you can point me to a task that I can use for evaluation!
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10922/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10921 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10921/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10921/comments | https://api.github.com/repos/huggingface/transformers/issues/10921/events | https://github.com/huggingface/transformers/issues/10921 | 841,642,760 | MDU6SXNzdWU4NDE2NDI3NjA= | 10,921 | Tokenizer is adding ## to every word from the second. | {
"login": "leoxu1007",
"id": 22413258,
"node_id": "MDQ6VXNlcjIyNDEzMjU4",
"avatar_url": "https://avatars.githubusercontent.com/u/22413258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leoxu1007",
"html_url": "https://github.com/leoxu1007",
"followers_url": "https://api.github.com/users/leoxu1007/followers",
"following_url": "https://api.github.com/users/leoxu1007/following{/other_user}",
"gists_url": "https://api.github.com/users/leoxu1007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leoxu1007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leoxu1007/subscriptions",
"organizations_url": "https://api.github.com/users/leoxu1007/orgs",
"repos_url": "https://api.github.com/users/leoxu1007/repos",
"events_url": "https://api.github.com/users/leoxu1007/events{/privacy}",
"received_events_url": "https://api.github.com/users/leoxu1007/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe @polm or @singletongue have an idea!",
"I didn't implement BertTokenizer so I'm a little out of my depth here, but the code below in a clean environment worked fine for me with no weird hashes.\r\n\r\n```\r\nfrom transformers import BertJapaneseTokenizer\r\n\r\nname = \"cl-tohoku/bert-base-japanese-whole-word-masking\"\r\nname = \"cl-tohoku/bert-base-japanese\"\r\ntokenizer = BertJapaneseTokenizer.from_pretrained(name)\r\n\r\ntext = \"テレビでサッカーの試合を見る。\"\r\n\r\nout = tokenizer.tokenize(text)\r\nprint(out)\r\n```\r\n\r\nI will note it is especially weird that the last word in your list (`。`) doesn't have the hashes.",
"Thank you for you reply.\r\nHere is the result.\r\n['テレビ', '##で', '##サッカー', '##の', '##試', '##合', '##を', '##見', '##る', '。']\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@leoxu1007\r\nCould it be possible that you set `word_tokenizer_type` to `basic` ?\r\nI reproduced the same result by this configuration.\r\nI mean I got ['テレビ', '##で', '##サッカー', '##の', '##試', '##合', '##を', '##見', '##る', '。'].\r\n\r\nNow, `BertJapaneseTokenizer` pretrained tokenizer's default configuration is `word_tokenizer_type='mecab'`.\r\nSo we don't usually get this unexpected result.\r\nI tried the example with `mecab` I got ['テレビ', 'で', 'サッカー', 'の', '試合', 'を', '見る', '。'].\r\n"
] | 1,616 | 1,625 | 1,620 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.2
- Platform: Linux-5.8.0-44-generic-x86_64-with-Ubuntu-20.04-focal
- Python version: 3.6.13
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
tokenizers: @LysandreJik
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ O ] the official example scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. The tokenizer is adding ## to every word from the second one onward.
For example, the code is:
text = 'テレビでサッカーの試合を見る。'
tokenized_text = tokenizer.tokenize(text)
Output is expected to be: ['テレビ', 'で', 'サッカー', 'の', '試合', 'を', '見る', '。']
But I get ['テレビ', '##で', '##サッカー', '##の', '##試', '##合', '##を', '##見', '##る', '。']
I don't know why it adds ## at the start of these words...
```
import torch
from transformers import BertJapaneseTokenizer, BertForMaskedLM
# Model path
def sel_model(pre_model='32d'):
    if pre_model == '32d':
        sel = '/home/Xu_Zhenyu/cl-tohoku/BERT-base_mecab-ipadic-bpe-32k_do-whole-word-mask/'
    elif pre_model == '4d':
        sel = '/home/Xu_Zhenyu/cl-tohoku/BERT-base_mecab-ipadic-char-4k_do-whole-word-mask/'
    elif pre_model == '32n':
        sel = '/home/Xu_Zhenyu/cl-tohoku/BERT-base_mecab-ipadic-bpe-32k_no-whole-word-mask/'
    elif pre_model == '4n':
        sel = '/home/Xu_Zhenyu/cl-tohoku/BERT-base_mecab-ipadic-char-4k_no-whole-word-mask/'
    else:
        sel = '/home/Xu_Zhenyu/cl-tohoku/BERT-base_mecab-ipadic-bpe-32k_do-whole-word-mask/'
    return sel
# Load pre-trained tokenizer
tokenizer = BertJapaneseTokenizer.from_pretrained(sel_model())
# Tokenize input
text = 'テレビでサッカーの試合を見る。'
tokenized_text = tokenizer.tokenize(text)
# ['テレビ', 'で', 'サッカー', 'の', '試合', 'を', '見る', '。']
# Mask a token that we will try to predict back with `BertForMaskedLM`
masked_index = 2
tokenized_text[masked_index] = '[MASK]'
print(tokenized_text)
# ['テレビ', 'で', '[MASK]', 'の', '試合', 'を', '見る', '。']
# Convert token to vocabulary indices
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
# [571, 12, 4, 5, 608, 11, 2867, 8]
# Convert inputs to PyTorch tensors
tokens_tensor = torch.tensor([indexed_tokens])
# tensor([[ 571, 12, 4, 5, 608, 11, 2867, 8]])
# Load pre-trained model
model = BertForMaskedLM.from_pretrained(sel_model())
model.eval()
# Predict
with torch.no_grad():
    outputs = model(tokens_tensor)
    predictions = outputs[0][0, masked_index].topk(5)  # extract the top 5 prediction candidates

# Show results
for i, index_t in enumerate(predictions.indices):
    index = index_t.item()
    token = tokenizer.convert_ids_to_tokens([index])[0]
    print(i, token)
```
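Based on the last comment above, a minimal sketch that pins the word tokenizer explicitly (an assumption for illustration: it presumes the locally saved checkpoints were configured with `word_tokenizer_type='basic'`, and it requires `fugashi` and `ipadic` to be installed for MeCab):
```python
from transformers import BertJapaneseTokenizer

# Same local directory as returned by sel_model() above.
local_path = '/home/Xu_Zhenyu/cl-tohoku/BERT-base_mecab-ipadic-bpe-32k_do-whole-word-mask/'

tokenizer = BertJapaneseTokenizer.from_pretrained(
    local_path,
    word_tokenizer_type="mecab",          # word-level segmentation before WordPiece
    subword_tokenizer_type="wordpiece",
)
print(tokenizer.tokenize("テレビでサッカーの試合を見る。"))
# Expected: ['テレビ', 'で', 'サッカー', 'の', '試合', 'を', '見る', '。']
```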
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10921/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10920 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10920/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10920/comments | https://api.github.com/repos/huggingface/transformers/issues/10920/events | https://github.com/huggingface/transformers/pull/10920 | 841,637,670 | MDExOlB1bGxSZXF1ZXN0NjAxMzQ0OTg2 | 10,920 | Rename NLP library to Datasets library | {
"login": "tomy0000000",
"id": 23290356,
"node_id": "MDQ6VXNlcjIzMjkwMzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/23290356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomy0000000",
"html_url": "https://github.com/tomy0000000",
"followers_url": "https://api.github.com/users/tomy0000000/followers",
"following_url": "https://api.github.com/users/tomy0000000/following{/other_user}",
"gists_url": "https://api.github.com/users/tomy0000000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomy0000000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomy0000000/subscriptions",
"organizations_url": "https://api.github.com/users/tomy0000000/orgs",
"repos_url": "https://api.github.com/users/tomy0000000/repos",
"events_url": "https://api.github.com/users/tomy0000000/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomy0000000/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger Please review"
] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | # What does this PR do?
Fixes #10897
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10920/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10920",
"html_url": "https://github.com/huggingface/transformers/pull/10920",
"diff_url": "https://github.com/huggingface/transformers/pull/10920.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10920.patch",
"merged_at": 1616760479000
} |
https://api.github.com/repos/huggingface/transformers/issues/10919 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10919/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10919/comments | https://api.github.com/repos/huggingface/transformers/issues/10919/events | https://github.com/huggingface/transformers/issues/10919 | 841,621,101 | MDU6SXNzdWU4NDE2MjExMDE= | 10,919 | GPT2 on TPU, training is so slow. | {
"login": "enkhjile",
"id": 29907488,
"node_id": "MDQ6VXNlcjI5OTA3NDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/29907488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enkhjile",
"html_url": "https://github.com/enkhjile",
"followers_url": "https://api.github.com/users/enkhjile/followers",
"following_url": "https://api.github.com/users/enkhjile/following{/other_user}",
"gists_url": "https://api.github.com/users/enkhjile/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enkhjile/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enkhjile/subscriptions",
"organizations_url": "https://api.github.com/users/enkhjile/orgs",
"repos_url": "https://api.github.com/users/enkhjile/repos",
"events_url": "https://api.github.com/users/enkhjile/events{/privacy}",
"received_events_url": "https://api.github.com/users/enkhjile/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,620 | 1,620 | NONE | null | When training GPT2 on a TPU from scratch, the training loss stays constant and the evaluation loss decreases only by a very small amount.
> [INFO|trainer.py:1776] 2021-03-26 04:06:22,551 >> Num examples = 100000
> [INFO|trainer.py:1777] 2021-03-26 04:06:22,551 >> Batch size = 2
> {'eval_loss': 4.687133312225342, 'eval_runtime': 736.3302, 'eval_samples_per_second': 135.809, 'epoch': 0.05}
> 1%|# | 22000/2080235 [22:38:08<2499:54:52, 4.37s/it] [INFO|trainer.py:1528] 2021-03-26 04:18:38,885 >> Saving model checkpoint to outputs/line_by_line/checkpoint-22000
> [INFO|configuration_utils.py:314] 2021-03-26 04:18:38,912 >> Configuration saved in outputs/line_by_line/checkpoint-22000/config.json
> [INFO|modeling_utils.py:837] 2021-03-26 04:18:56,125 >> Model weights saved in outputs/line_by_line/checkpoint-22000/pytorch_model.bin
> [INFO|tokenization_utils_base.py:1896] 2021-03-26 04:18:56,130 >> tokenizer config file saved in outputs/line_by_line/checkpoint-22000/tokenizer_config.json
> [INFO|tokenization_utils_base.py:1902] 2021-03-26 04:18:56,131 >> Special tokens file saved in outputs/line_by_line/checkpoint-22000/special_tokens_map.json
> {'loss': 2.56, 'learning_rate': 0.0004963706023598295, 'epoch': 0.05}
> {'loss': 2.56, 'learning_rate': 0.0004963465666138682, 'epoch': 0.05}
> {'loss': 2.56, 'learning_rate': 0.0004963225308679067, 'epoch': 0.05}
> {'loss': 2.56, 'learning_rate': 0.0004962984951219453, 'epoch': 0.05}
> {'loss': 2.56, 'learning_rate': 0.000496274459375984, 'epoch': 0.05}
> {'loss': 2.56, 'learning_rate': 0.0004962504236300226, 'epoch': 0.05}
> {'loss': 2.56, 'learning_rate': 0.0004962263878840611, 'epoch': 0.05}
> {'loss': 2.56, 'learning_rate': 0.0004962023521380998, 'epoch': 0.05}
> {'loss': 2.56, 'learning_rate': 0.0004961783163921384, 'epoch': 0.06}
> {'loss': 2.56, 'learning_rate': 0.000496154280646177, 'epoch': 0.06}
> 1%|#1 | 23000/2080235 [23:52:03<2524:57:42, 4.42s/it][INFO|trainer.py:1775] 2021-03-26 05:32:34,207 >> ***** Running Evaluation *****
> [INFO|trainer.py:1776] 2021-03-26 05:32:34,317 >> Num examples = 100000
> [INFO|trainer.py:1777] 2021-03-26 05:32:34,317 >> Batch size = 2
> {'eval_loss': 4.667241096496582, 'eval_runtime': 739.6907, 'eval_samples_per_second': 135.192, 'epoch': 0.06}`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10919/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10918 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10918/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10918/comments | https://api.github.com/repos/huggingface/transformers/issues/10918/events | https://github.com/huggingface/transformers/issues/10918 | 841,597,170 | MDU6SXNzdWU4NDE1OTcxNzA= | 10,918 | OSError: file bert-base-uncased/config.json not found | {
"login": "pkuzengqi",
"id": 19232605,
"node_id": "MDQ6VXNlcjE5MjMyNjA1",
"avatar_url": "https://avatars.githubusercontent.com/u/19232605?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pkuzengqi",
"html_url": "https://github.com/pkuzengqi",
"followers_url": "https://api.github.com/users/pkuzengqi/followers",
"following_url": "https://api.github.com/users/pkuzengqi/following{/other_user}",
"gists_url": "https://api.github.com/users/pkuzengqi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pkuzengqi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pkuzengqi/subscriptions",
"organizations_url": "https://api.github.com/users/pkuzengqi/orgs",
"repos_url": "https://api.github.com/users/pkuzengqi/repos",
"events_url": "https://api.github.com/users/pkuzengqi/events{/privacy}",
"received_events_url": "https://api.github.com/users/pkuzengqi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm also facing the same issue. Did you find any fix yet . ??",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I'm also facing the same issue. Did you guys find any fix yet . ??",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Same problem here, please write if someone found a valid solution.",
"Facing same error",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi, I've had the same error but with `roberta-base`. It appeared that I had an empty folder named `roberta-base` in my working directory. Removing it solved the issue.",
"I found this issue is caused by setting cache directory using checkpoint name\r\nTrainingArguments(checkpoint,evaluation_strategy='steps')\r\n\r\nchange checkpoint to something else resolve the issue\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"> #353\r\n\r\nGot the same issue, thanks for reporting it here. Was able to fix it following after going through your comment.",
"> Hi, I've had the same error but with `roberta-base`. It appeared that I had an empty folder named `roberta-base` in my working directory. Removing it solved the issue.\r\n\r\nYou are literally an angel."
] | 1,616 | 1,705 | 1,629 | NONE | null | ## Environment info
- `transformers` version: 4.4.2
- Python version: 3.6
- PyTorch version (GPU?): 1.8.0 (Tesla V100)
## Information
The problem arises when using:
```
from transformers import BertModel
model = BertModel.from_pretrained('bert-base-uncased')
```
Error Info (Some personal info has been replaced by ---)
```
file bert-base-uncased/config.json not found
Traceback (most recent call last):
File "---/anaconda3/envs/attn/lib/python3.6/site-packages/transformers-4.2.2-py3.8.egg/transformers/configuration_utils.py", line 420, in get_config_dict
File "---/anaconda3/envs/attn/lib/python3.6/site-packages/transformers-4.2.2-py3.8.egg/transformers/file_utils.py", line 1063, in cached_path
OSError: file bert-base-uncased/config.json not found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "---.py", line 107, in <module>
from_pretrained_input()
File "---.py", line 96, in from_pretrained_input
model = BertModel.from_pretrained('bert-base-uncased')
File "---/anaconda3/envs/attn/lib/python3.6/site-packages/transformers-4.2.2-py3.8.egg/transformers/modeling_utils.py", line 962, in from_pretrained
File "---/anaconda3/envs/attn/lib/python3.6/site-packages/transformers-4.2.2-py3.8.egg/transformers/configuration_utils.py", line 372, in from_pretrained
File "---/anaconda3/envs/attn/lib/python3.6/site-packages/transformers-4.2.2-py3.8.egg/transformers/configuration_utils.py", line 432, in get_config_dict
OSError: Can't load config for 'bert-base-uncased'. Make sure that:
- 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'bert-base-uncased' is the correct path to a directory containing a config.json file
```
#### What I have read:
https://github.com/huggingface/transformers/issues/353
#### What I have tried:
1. loading from a downloaded model file works well
```
wget https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz
```
unzip the file and rename ```bert_config.json``` as ```config.json```, then
```
model = BertModel.from_pretrained(BERT_BASE_UNCASED_CACHE)
```
2. enough disk space, enough memory, free GPU
3. open internet connection, no proxy
4.
```
import pytorch_pretrained_bert as ppb
assert 'bert-large-cased' in ppb.modeling.PRETRAINED_MODEL_ARCHIVE_MAP
```
5. The following models work well
```
model = BertModel.from_pretrained('bert-base-cased')
model = RobertaModel.from_pretrained('roberta-base')
```
6. Works in the server's command line but not in local PyCharm (remote deployment to the server)
Observation:
- PyCharm can find the ```transformers``` installed with pip, but that installation triggers this problem
- PyCharm cannot find the current ```transformers``` installed with conda
```conda install transformers=4.4 -n env -c huggingface```
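A quick way to narrow this down (a sketch added for illustration, not part of the original report) is to print which `transformers` installation the interpreter actually imports and whether a local directory is shadowing the model id, which one of the comments above identifies as the cause in a similar case:
```python
import os
import transformers

# Which installation (pip vs. the conda environment) is actually imported?
print(transformers.__version__, transformers.__file__)

# An empty local folder named like the model id makes from_pretrained treat it
# as a path and expect a config.json inside it.
print(os.path.isdir("bert-base-uncased"))
```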
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10918/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10918/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10917 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10917/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10917/comments | https://api.github.com/repos/huggingface/transformers/issues/10917/events | https://github.com/huggingface/transformers/issues/10917 | 841,556,861 | MDU6SXNzdWU4NDE1NTY4NjE= | 10,917 | longformer speed compared to bert model | {
"login": "gkim89",
"id": 80439799,
"node_id": "MDQ6VXNlcjgwNDM5Nzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/80439799?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gkim89",
"html_url": "https://github.com/gkim89",
"followers_url": "https://api.github.com/users/gkim89/followers",
"following_url": "https://api.github.com/users/gkim89/following{/other_user}",
"gists_url": "https://api.github.com/users/gkim89/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gkim89/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gkim89/subscriptions",
"organizations_url": "https://api.github.com/users/gkim89/orgs",
"repos_url": "https://api.github.com/users/gkim89/repos",
"events_url": "https://api.github.com/users/gkim89/events{/privacy}",
"received_events_url": "https://api.github.com/users/gkim89/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nIs it possible to ask questions related to training on the [forum](https://discuss.huggingface.co/) rather than here? For example, all questions related to training LongFormer can be found [here](https://discuss.huggingface.co/search?q=longformer).\r\n\r\nThe authors of Transformers like to keep Github issues for bugs/feature requests.\r\n\r\nThank you. ",
"sure. thank you for the quick response"
] | 1,616 | 1,616 | 1,616 | NONE | null | We are trying to use a LongFormer and Bert model for multi-label classification of different documents.
When we use the BERT model (BertForSequenceClassification) with max length 512 (batch size 8) each epoch takes approximately 30 minutes.
When we use LongFormer (LongformerForSequenceClassification with the 'allenai/longformer-base-4096' and gradient_checkpointing=True) with max length 4096 (batch size 1, Gradient Accumulation step 8) each epoch takes approximately 12 hours.
Is this reasonable or are we missing something?
Is there anything that we can try to make the training faster? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10917/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10916 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10916/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10916/comments | https://api.github.com/repos/huggingface/transformers/issues/10916/events | https://github.com/huggingface/transformers/issues/10916 | 841,510,440 | MDU6SXNzdWU4NDE1MTA0NDA= | 10,916 | AttributeError: 'Trainer' object has no attribute 'log_metrics' | {
"login": "yananchen1989",
"id": 26405281,
"node_id": "MDQ6VXNlcjI2NDA1Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yananchen1989",
"html_url": "https://github.com/yananchen1989",
"followers_url": "https://api.github.com/users/yananchen1989/followers",
"following_url": "https://api.github.com/users/yananchen1989/following{/other_user}",
"gists_url": "https://api.github.com/users/yananchen1989/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yananchen1989/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yananchen1989/subscriptions",
"organizations_url": "https://api.github.com/users/yananchen1989/orgs",
"repos_url": "https://api.github.com/users/yananchen1989/repos",
"events_url": "https://api.github.com/users/yananchen1989/events{/privacy}",
"received_events_url": "https://api.github.com/users/yananchen1989/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You should install Transformers from source. See #10446.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,620 | 1,620 | NONE | null | I am trying to fine-tune distilbert-base-uncased on my own dataset, which consists of CSV files with one news item per line.
here is my command:
```
nohup python run_mlm.py \
--model_name_or_path distilbert-base-uncased \
--train_file df_finetune_train.csv \
--validation_file df_finetune_test.csv \
--do_train \
--do_eval \
--preprocessing_num_workers 72 \
--output_dir ./finetuned_bert \
--overwrite_cache True \
--max_seq_length 256 \
--line_by_line True > log_fintune_mlm &
```
Here is the error.
> {'loss': 1.7847, 'learning_rate': 3.264263411864888e-07, 'epoch': 2.98}
> {'loss': 1.7906, 'learning_rate': 1.7858832434478192e-07, 'epoch': 2.99}
> {'loss': 1.7839, 'learning_rate': 3.075030750307503e-08, 'epoch': 3.0}
> {'train_runtime': 65966.5445, 'train_samples_per_second': 2.563, 'epoch': 3.0}
> Traceback (most recent call last):
> File "run_mlm.py", line 487, in <module>
> main()
> File "run_mlm.py", line 462, in main
> trainer.log_metrics("train", metrics)
> AttributeError: 'Trainer' object has no attribute 'log_metrics'
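The first comment above points to installing Transformers from source, since the example scripts on master track the in-development Trainer API; a minimal sketch of that install (the exact commit to use is not specified in the thread):
```bash
pip install --upgrade git+https://github.com/huggingface/transformers.git
```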
transformers version: 4.3.3
torch version: 1.5.0+cu101 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10916/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10915 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10915/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10915/comments | https://api.github.com/repos/huggingface/transformers/issues/10915/events | https://github.com/huggingface/transformers/pull/10915 | 841,482,592 | MDExOlB1bGxSZXF1ZXN0NjAxMjE2ODQz | 10,915 | Bump pyyaml from 5.3.1 to 5.4 in /examples/research_projects/lxmert | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] | closed | false | null | [] | [
"Looks like pyyaml is up-to-date now, so this is no longer needed."
] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | Bumps [pyyaml](https://github.com/yaml/pyyaml) from 5.3.1 to 5.4.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/yaml/pyyaml/blob/master/CHANGES">pyyaml's changelog</a>.</em></p>
<blockquote>
<p>5.4 (2021-01-19)</p>
<ul>
<li><a href="https://github-redirect.dependabot.com/yaml/pyyaml/pull/407">yaml/pyyaml#407</a> -- Build modernization, remove distutils, fix metadata, build wheels, CI to GHA</li>
<li><a href="https://github-redirect.dependabot.com/yaml/pyyaml/pull/472">yaml/pyyaml#472</a> -- Fix for CVE-2020-14343, moves arbitrary python tags to UnsafeLoader</li>
<li><a href="https://github-redirect.dependabot.com/yaml/pyyaml/pull/441">yaml/pyyaml#441</a> -- Fix memory leak in implicit resolver setup</li>
<li><a href="https://github-redirect.dependabot.com/yaml/pyyaml/pull/392">yaml/pyyaml#392</a> -- Fix py2 copy support for timezone objects</li>
<li><a href="https://github-redirect.dependabot.com/yaml/pyyaml/pull/378">yaml/pyyaml#378</a> -- Fix compatibility with Jython</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/yaml/pyyaml/commit/58d0cb7ee09954c67fabfbd714c5673b03e7a9e1"><code>58d0cb7</code></a> 5.4 release</li>
<li><a href="https://github.com/yaml/pyyaml/commit/a60f7a19c0b418fe95fcf2ec0957005ae39e1090"><code>a60f7a1</code></a> Fix compatibility with Jython</li>
<li><a href="https://github.com/yaml/pyyaml/commit/ee98abd7d7bd2ca9c7b98aa19164fd0306a3f3d2"><code>ee98abd</code></a> Run CI on PR base branch changes</li>
<li><a href="https://github.com/yaml/pyyaml/commit/ddf20330be1fae8813b8ce1789c48f244746d252"><code>ddf2033</code></a> constructor.timezone: _<em>copy</em> & <strong>deepcopy</strong></li>
<li><a href="https://github.com/yaml/pyyaml/commit/fc914d52c43f499224f7fb4c2d4c47623adc5b33"><code>fc914d5</code></a> Avoid repeatedly appending to yaml_implicit_resolvers</li>
<li><a href="https://github.com/yaml/pyyaml/commit/a001f2782501ad2d24986959f0239a354675f9dc"><code>a001f27</code></a> Fix for CVE-2020-14343</li>
<li><a href="https://github.com/yaml/pyyaml/commit/fe150624146ee631bb0f95e45731e8b01281fed6"><code>fe15062</code></a> Add 3.9 to appveyor file for completeness sake</li>
<li><a href="https://github.com/yaml/pyyaml/commit/1e1c7fb7c09e9149967c208a6fd07276a6140d57"><code>1e1c7fb</code></a> Add a newline character to end of pyproject.toml</li>
<li><a href="https://github.com/yaml/pyyaml/commit/0b6b7d61719fbe0a11f0980489f1bf8ce746c164"><code>0b6b7d6</code></a> Start sentences and phrases for capital letters</li>
<li><a href="https://github.com/yaml/pyyaml/commit/c97691596eec279ef9191a9b3bba583a17139d5a"><code>c976915</code></a> Shell code improvements</li>
<li>Additional commits viewable in <a href="https://github.com/yaml/pyyaml/compare/5.3.1...5.4">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10915/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10915",
"html_url": "https://github.com/huggingface/transformers/pull/10915",
"diff_url": "https://github.com/huggingface/transformers/pull/10915.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10915.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10914 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10914/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10914/comments | https://api.github.com/repos/huggingface/transformers/issues/10914/events | https://github.com/huggingface/transformers/pull/10914 | 841,431,754 | MDExOlB1bGxSZXF1ZXN0NjAxMTcwNDMy | 10,914 | [vulnerability] fix dependency | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | this PR fixes https://github.com/huggingface/transformers/security/dependabot/examples/research_projects/lxmert/requirements.txt/PyYAML/open
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10914/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10914",
"html_url": "https://github.com/huggingface/transformers/pull/10914",
"diff_url": "https://github.com/huggingface/transformers/pull/10914.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10914.patch",
"merged_at": 1616763972000
} |
https://api.github.com/repos/huggingface/transformers/issues/10913 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10913/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10913/comments | https://api.github.com/repos/huggingface/transformers/issues/10913/events | https://github.com/huggingface/transformers/issues/10913 | 841,429,738 | MDU6SXNzdWU4NDE0Mjk3Mzg= | 10,913 | pegasus xsum won't train on xsum dataset | {
"login": "tomlinsonm",
"id": 7818956,
"node_id": "MDQ6VXNlcjc4MTg5NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7818956?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomlinsonm",
"html_url": "https://github.com/tomlinsonm",
"followers_url": "https://api.github.com/users/tomlinsonm/followers",
"following_url": "https://api.github.com/users/tomlinsonm/following{/other_user}",
"gists_url": "https://api.github.com/users/tomlinsonm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomlinsonm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomlinsonm/subscriptions",
"organizations_url": "https://api.github.com/users/tomlinsonm/orgs",
"repos_url": "https://api.github.com/users/tomlinsonm/repos",
"events_url": "https://api.github.com/users/tomlinsonm/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomlinsonm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,620 | 1,620 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 5.0dev
- Platform: linux
- Python version: 3.6.9
- PyTorch version (GPU?): pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
- Tensorflow version (GPU?): 2.4.1
- Using GPU in script?: Both -
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Pegasus XSUM
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I am using run_summarization.py to retrain a fine-tuned model (before I try it on my own data). I first fine-tuned on gigaword for a few thousand iterations, tested it on gigaword, then switched to evaluate on the xsum dataset. The xsum eval dataset produces the following error on the CPU (similar error on GPU, just with a lot of extra fluff)
```
File "run_summarization.py", line 593, in <module>
main()
File "run_summarization.py", line 550, in main
max_length=data_args.val_max_target_length, num_beams=data_args.num_beams, metric_key_prefix="eval"
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/transformers/trainer_seq2seq.py", line 74, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/transformers/trainer.py", line 1707, in evaluate
metric_key_prefix=metric_key_prefix,
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/transformers/trainer.py", line 1838, in prediction_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/transformers/trainer_seq2seq.py", line 167, in prediction_step
**gen_kwargs,
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/transformers/generation_utils.py", line 927, in generate
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/transformers/generation_utils.py", line 412, in _prepare_encoder_decoder_kwargs_for_generation
model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 725, in forward
embed_pos = self.embed_positions(input_shape)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 139, in forward
return super().forward(positions)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 147, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/torch/nn/functional.py", line 1913, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
```
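The traceback above stops inside the positional-embedding lookup, so a hedged first check (an assumption, not a confirmed diagnosis) is whether the tokenized XSum inputs exceed the checkpoint's position-embedding table:
```python
from transformers import AutoConfig, AutoTokenizer

# Placeholder checkpoint name: substitute the fine-tuned Pegasus model used above.
checkpoint = "google/pegasus-xsum"
config = AutoConfig.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

print("max_position_embeddings:", config.max_position_embeddings)

sample = "a long XSum article goes here ..."
print("encoded length:", len(tokenizer(sample, truncation=False)["input_ids"]))
```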
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10913/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10912 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10912/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10912/comments | https://api.github.com/repos/huggingface/transformers/issues/10912/events | https://github.com/huggingface/transformers/issues/10912 | 841,332,542 | MDU6SXNzdWU4NDEzMzI1NDI= | 10,912 | Summarization length not controlled by max_length, min_length | {
"login": "xiaohy9",
"id": 75334329,
"node_id": "MDQ6VXNlcjc1MzM0MzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/75334329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaohy9",
"html_url": "https://github.com/xiaohy9",
"followers_url": "https://api.github.com/users/xiaohy9/followers",
"following_url": "https://api.github.com/users/xiaohy9/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaohy9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiaohy9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaohy9/subscriptions",
"organizations_url": "https://api.github.com/users/xiaohy9/orgs",
"repos_url": "https://api.github.com/users/xiaohy9/repos",
"events_url": "https://api.github.com/users/xiaohy9/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiaohy9/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"The `max_length` and `min_length` are in terms of tokens, not words. As some words consist of multiple tokens, this results in fewer words to be generated than you might expect. ",
"@NielsRogge\r\nThanks for the answer. It makes sense. But when are words consist of multiple tokens, can you give me some examples?\r\n\r\nAlso, would it be better for arguments (max_length, min_length) refer to number of words instead of tokens as to better control the outputs, which are natural language for human?",
"Running into a similar issue when using `generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')` . I can get better control when using `min_length=..,max_length=..` but I have no ultimate control when e.g. querying for `Below is the code for a react app with a blue button that says 'click me'`\r\n\r\n\r\n```\r\n{'generated_text': \"Below is the code for a react app with a blue button that says 'click me' that is to be used by react-router. \\nimport React, { Component } from 'react';\\n\\nimport { Link } from 'react\"}]\r\n```\r\n\r\n\r\nMy result is \"cut off\" and I would be very happy to set a desired length of resulting words.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Stalebots are so much an anti-quality thing :-/",
"> Running into a similar issue when using `generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')` . I can get better control when using `min_length=..,max_length=..` but I have no ultimate control when e.g. querying for `Below is the code for a react app with a blue button that says 'click me'`\r\n> \r\n> ```\r\n> {'generated_text': \"Below is the code for a react app with a blue button that says 'click me' that is to be used by react-router. \\nimport React, { Component } from 'react';\\n\\nimport { Link } from 'react\"}]\r\n> ```\r\n> \r\n> My result is \"cut off\" and I would be very happy to set a desired length of resulting words.\r\n\r\nSame issue for me, anyone found a solution regarding this? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"***Stalebots are so much an anti-quality measure and have not been fixed***",
"cc @patil-suraj @patrickvonplaten ",
"@chris-aeviator - do you want to have exactly `max_length` words? In this case you have to disable the eos_token_id => you should be able to just do `model.generate(...., eos_token_id=None)`"
] | 1,616 | 1,624 | null | NONE | null | I am using the pretrained ctrlsum-cnndm model from transformers. I noticed that the summarization text length is not exactly controlled by the max_length and min_length arguments of model.generate(), and I am not sure why. It appears that empty spaces are included, but I am not sure. Please help. Thanks.
```
text1="The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("hyunwoongko/ctrlsum-cnndm")
model = AutoModelForSeq2SeqLM.from_pretrained("hyunwoongko/ctrlsum-cnndm")
inputs = tokenizer.encode(text1, return_tensors="pt", max_length=1024)#16
outputs = model.generate(inputs, max_length=100, min_length=50, num_beams=5, early_stopping=True)
print(tokenizer.decode(outputs[0]))
```
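As the first comment above explains, `max_length` and `min_length` bound the number of generated tokens rather than words, which is why the word counts reported below fall short of those values. A small standalone check of that gap (an illustrative addition, reusing the text of the first reported output):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hyunwoongko/ctrlsum-cnndm")

summary = ("The Eiffel Tower is 324 metres (1,063 ft) tall, about the same height as an "
           "81-storey building. It is the tallest structure in Paris and the second tallest "
           "free-standing structure in France after the Millau Viaduct.")

print(len(summary.split()), "words")               # 36 words, as reported below
print(len(tokenizer.tokenize(summary)), "tokens")  # this is what min_length/max_length constrain
print(tokenizer.tokenize("1,063 ft"))              # a single "word" can map to several tokens
```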
Results:
max_length=100, min_length=50, actually 36 words
`</s> The Eiffel Tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building. It is the tallest structure in Paris and the second tallest free-standing structure in France after the Millau Viaduct.</s>
`
max_length=200, min_length=100, actually 83 words
`</s> The Eiffel Tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. It was the tallest man-made structure in the world for 41 years until the Chrysler Building in New York City was finished in 1930. It is the second tallest free-standing structure in France after the Millau Viaduct, which measures 125 metres (410 ft) on each side. The tower is now taller than the Chrysler building by 5.2 metres (17 ft)</s>
` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10912/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10911 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10911/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10911/comments | https://api.github.com/repos/huggingface/transformers/issues/10911/events | https://github.com/huggingface/transformers/pull/10911 | 841,303,898 | MDExOlB1bGxSZXF1ZXN0NjAxMDU1NDg3 | 10,911 | Add nvidia megatron models | {
"login": "jdemouth",
"id": 1792006,
"node_id": "MDQ6VXNlcjE3OTIwMDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1792006?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jdemouth",
"html_url": "https://github.com/jdemouth",
"followers_url": "https://api.github.com/users/jdemouth/followers",
"following_url": "https://api.github.com/users/jdemouth/following{/other_user}",
"gists_url": "https://api.github.com/users/jdemouth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jdemouth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jdemouth/subscriptions",
"organizations_url": "https://api.github.com/users/jdemouth/orgs",
"repos_url": "https://api.github.com/users/jdemouth/repos",
"events_url": "https://api.github.com/users/jdemouth/events{/privacy}",
"received_events_url": "https://api.github.com/users/jdemouth/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"@LysandreJik - you'll see that the last test (marked 'slow') in tests/test_modeling_megatron_bert.py points to a checkpoint in examples/megatron-models (downloaded following the instructions described in examples/megatron-models/README.md). I was not sure how to deal with that so suggestions are welcome (for other items too ;)).",
"We have a few failing tests, let me break them down for you:\r\n\r\n### build_doc\r\n\r\nThe build_doc is failing because of the following errors:\r\n```\r\n/home/circleci/transformers/docs/source/model_doc/megatron_bert.rst:document isn't included in any toctree\r\n```\r\n\r\nThe `megatron_bert.rst` should be defined in the index of the docs :)\r\n\r\n### check_code_quality\r\n\r\nThe error is:\r\n```\r\n2 files would be reformatted, 783 files would be left unchanged.\r\n```\r\n\r\nFor this, you should install the quality tools: `pip install -e .[quality]` (from the root of the repo)\r\nand run the following:\r\n```\r\nmake fixup\r\n```\r\n\r\nThis is going to fix some files, and tell you if there are errors it cannot resolve. If there are some, it should tell you how to fix them.\r\n\r\n### run_test_flax, run_tests_tf, and run_tests_pipelines_tf\r\n\r\nThis is due to the following error:\r\n\r\n```\r\n____________ ERROR collecting tests/test_modeling_megatron_bert.py _____________\r\ntests/test_modeling_megatron_bert.py:256: in <module>\r\n class MegatronBertModelTest(ModelTesterMixin, unittest.TestCase):\r\ntests/test_modeling_megatron_bert.py:259: in MegatronBertModelTest\r\n MegatronBertModel,\r\nE NameError: name 'MegatronBertModel' is not defined\r\n```\r\n\r\nI think this comes from a missing `@require_torch` decorator on one of your tests. This decorator tells the suite that this test requires torch, and to not run this test if torch is not found as a dependency inside the environment.\r\n\r\nIf that's not it, then it may be that it's missing a dummy object. Running `make fix-copies` should fix this, but you should already have run this if you have done the fix mentioned above relative to the style/code quality."
] | 1,616 | 1,617 | 1,617 | CONTRIBUTOR | null | # What does this PR do?
Add the megatron_gpt2 model. That model reuses the existing GPT2 model. This
commit includes a script to convert a Megatron-GPT2 checkpoint downloaded
from NVIDIA GPU Cloud. See examples/megatron-models/README.md for details.
Add the megatron_bert model. That model is implemented as a modification of
the existing BERT model in Transformers. This commit includes a script to
convert a Megatron-BERT checkpoint downloaded from NVIDIA GPU Cloud. See
examples/megatron-models/README.md for details.
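For illustration, once the conversion script has produced a directory in Transformers format, the GPT-2 variant should load like any other checkpoint; the path below is a placeholder, and the tokenizer choice assumes the standard GPT-2 BPE vocabulary used by Megatron-LM:
```
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Directory written by the Megatron-GPT2 conversion script (hypothetical path).
model = GPT2LMHeadModel.from_pretrained("examples/megatron-models/megatron_gpt2_345m")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
```
The converted Megatron-BERT checkpoint would be loaded the same way with the new MegatronBert classes.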
@LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10911/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10911",
"html_url": "https://github.com/huggingface/transformers/pull/10911",
"diff_url": "https://github.com/huggingface/transformers/pull/10911.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10911.patch",
"merged_at": 1617905352000
} |
https://api.github.com/repos/huggingface/transformers/issues/10910 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10910/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10910/comments | https://api.github.com/repos/huggingface/transformers/issues/10910/events | https://github.com/huggingface/transformers/pull/10910 | 841,282,894 | MDExOlB1bGxSZXF1ZXN0NjAxMDM3NzM0 | 10,910 | Wav2Vec2 CommonVoice training - Save the processor before training starts | {
"login": "Nithin-Holla",
"id": 19574344,
"node_id": "MDQ6VXNlcjE5NTc0MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/19574344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nithin-Holla",
"html_url": "https://github.com/Nithin-Holla",
"followers_url": "https://api.github.com/users/Nithin-Holla/followers",
"following_url": "https://api.github.com/users/Nithin-Holla/following{/other_user}",
"gists_url": "https://api.github.com/users/Nithin-Holla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nithin-Holla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nithin-Holla/subscriptions",
"organizations_url": "https://api.github.com/users/Nithin-Holla/orgs",
"repos_url": "https://api.github.com/users/Nithin-Holla/repos",
"events_url": "https://api.github.com/users/Nithin-Holla/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nithin-Holla/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,618 | 1,618 | CONTRIBUTOR | null | # What does this PR do?
Currently, the Wav2Vec2 processor is saved at the end of training. However, the vocabulary is non-deterministic and varies between runs. Thus, if the training is killed before it's done, the processor is not saved, meaning that the checkpoints do not contain the processor configuration files, making them unusable for resuming training or for evaluating on the checkpoint. Hence, this PR saves the processor before the training begins.
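A minimal sketch of the change, with variable names assumed from the Common Voice fine-tuning script:
```
# Persist the processor (feature extractor + tokenizer with its run-specific
# vocabulary) before training, so every checkpoint is usable even if the run
# is interrupted.
processor.save_pretrained(training_args.output_dir)
trainer.train()
```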
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @patil-suraj
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10910/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10910/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10910",
"html_url": "https://github.com/huggingface/transformers/pull/10910",
"diff_url": "https://github.com/huggingface/transformers/pull/10910.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10910.patch",
"merged_at": 1618401126000
} |
https://api.github.com/repos/huggingface/transformers/issues/10909 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10909/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10909/comments | https://api.github.com/repos/huggingface/transformers/issues/10909/events | https://github.com/huggingface/transformers/issues/10909 | 841,261,323 | MDU6SXNzdWU4NDEyNjEzMjM= | 10,909 | LengthGroupedSampler slowly iterates over dataset | {
"login": "maxidl",
"id": 22561809,
"node_id": "MDQ6VXNlcjIyNTYxODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/22561809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxidl",
"html_url": "https://github.com/maxidl",
"followers_url": "https://api.github.com/users/maxidl/followers",
"following_url": "https://api.github.com/users/maxidl/following{/other_user}",
"gists_url": "https://api.github.com/users/maxidl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxidl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxidl/subscriptions",
"organizations_url": "https://api.github.com/users/maxidl/orgs",
"repos_url": "https://api.github.com/users/maxidl/repos",
"events_url": "https://api.github.com/users/maxidl/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxidl/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"I think the solution you suggested on the forums (leaving the length computation up to the user in the dataset) is a bit better. Will investigate a bit more.",
"> I think the solution you suggested on the forums (leaving the length computation up to the user in the dataset) is a bit better. Will investigate a bit more.\r\n\r\nI agree, that surely is a good solution if the user is aware of it. It just took us a while to find the source of that delay before training. Maybe we can have some sort of user hint? ",
"Fixed by #10953"
] | 1,616 | 1,617 | 1,617 | NONE | null | https://github.com/huggingface/transformers/blob/86c6f8a8b1f2bfc6c6d175590efc95a5e6facb51/src/transformers/trainer_pt_utils.py#L506
When using training arg "group_by_length", the sampler has to get the length of every input. It does so by iterating over the dataset in a simple for loop, without the use of a dataloader.
In my case, this led to long delays before the training starts, as reading the dataset without the worker parallelization from a dataloader took a long time.
I believe this can potentially be improved, although one could also precompute the lengths and pass them to the sampler (but this can only happen if the user is aware of it being a potential bottleneck).
Maybe it is a good idea to add a dataloader + num_workers option here?
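As a rough illustration, the lengths could be precomputed with a `DataLoader` so that several workers share the reads; the dataset variable and column name below are assumptions:
```
from torch.utils.data import DataLoader

# Collect only the tokenized lengths, parallelized across workers.
length_loader = DataLoader(
    train_dataset,
    batch_size=1024,
    num_workers=8,
    collate_fn=lambda samples: [len(s["input_ids"]) for s in samples],
)
lengths = [length for batch in length_loader for length in batch]
```
The resulting `lengths` list could then be handed to the sampler instead of letting it loop over the dataset itself.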
Thanks to Pere-Lluís and Pedro for being my co-detectives in spotting this caveat.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10909/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10909/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10908 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10908/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10908/comments | https://api.github.com/repos/huggingface/transformers/issues/10908/events | https://github.com/huggingface/transformers/issues/10908 | 841,221,962 | MDU6SXNzdWU4NDEyMjE5NjI= | 10,908 | Improve the documentation for TrainingArguments.label_names, and if possible raise an error if users misinterpret this attribute like I did | {
"login": "velixo",
"id": 7550072,
"node_id": "MDQ6VXNlcjc1NTAwNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7550072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/velixo",
"html_url": "https://github.com/velixo",
"followers_url": "https://api.github.com/users/velixo/followers",
"following_url": "https://api.github.com/users/velixo/following{/other_user}",
"gists_url": "https://api.github.com/users/velixo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/velixo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/velixo/subscriptions",
"organizations_url": "https://api.github.com/users/velixo/orgs",
"repos_url": "https://api.github.com/users/velixo/repos",
"events_url": "https://api.github.com/users/velixo/events{/privacy}",
"received_events_url": "https://api.github.com/users/velixo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"### Update:\r\n\r\n**I was wrong, my error is originally coming from badly assigned TrainingArguments.label_names. However, I strongly recommend fixes.**\r\n\r\nContinued investigating and realised my error is appearing because I don't understand the attribute `TrainingArguments.label_names`. I thought that it should be a list of strings, where each string is my specified name of a class.\r\n\r\n**Some background on my code/data structure:**\r\n\r\nI'm doing Multi-Class classification on sentences and my training data is an ordered list of a string sentences, and an associated list of classes in string form (i.e. the class's name). I.e. `sentences[i]` is a sentence sample, and `classes[i]` is the class of that sentence, more specifically the _name_ of that class, a string.\r\n\r\nIf I understand the HuggingFace - Transformers documentation correctly, I should be passing these classes to the Model as the `indices` to a One-Hot encoded list of classes instead. Or in another interpretation, class[i] should be a class number. So in my custom Dataset class, I send my sentences through a Transformers Tokenizer, and I use `MultiLabelBinarizer` from scikit-learn to One-Hot-Encode all my classes, convert it to a tensor, and then call argmax(dim=-1) on the `classes` tensor.\r\n\r\nOf course, I don't want the metrics report to just say \"class 0 has the f1-score X\", so I thought I could pass original class names to Trainer to be used when printing this metrics report. This is what I thought `TrainingArgument.label_names` was for, so I set `TrainingArguments.label_names = MultiLabelBinarizer.classes_`.\r\n\r\n**How my error appears**\r\n\r\nDuring the evaluation stage in `prediction_step()`, I saw with pdb that `inputs` indeed has a `labels` item, which makes sense as I'm trying to evaluate how well the model is performing here.\r\n\r\nI now understand that `prediction_step()` is used both when **predicting** _(i.e. we have no target labels in `inputs` and thus expect to not obtain a loss value)_, and for **evaluating** _(i.e. we **do** have target labels and should be able to get a loss value)_.\r\n\r\nAnd of course, this is what the `has_labels` variable is used for - indicating the presence of target labels in `input`, and thus that `prediction_step()` should be able to get a loss value using `Trainer.compute_loss`. However if `has_labels=False`, `prediction_step()` assumes that `outputs` variable **will not** have the item `loss`, and so we do not need to ignore this key when converting the `outputs` dict/SequenceClassificationOutput.\r\n\r\n\r\nHowever, since I apparently specified `TrainingArguments.label_names` incorrectly, `has_labels` becomes False _when it shouldn't be_, and everything gets messed up. `prediction_step()` thus assumes that we're predicting and doesn't filter out `loss` in outputs, which leads to the error described in my first post.\r\n\r\nI still don't understand what `TrainingArguments.label_names` is or should be.\r\nI recommend that two things should be done:\r\n\r\n- Improve the documentation regarding `TrainingArguments.label_names`, specifying the expected format, etc. \r\n- Earlier in the `Trainer` class, check that `TrainingArguments.label_names` is reasonably formatted and raise a specific error if it isn't, so the user doesn't recieve the above extremely confusing rabbit hole of an error that I did.. ",
"Ping @sgugger ",
"I'm unsure what you want to improve, the [documentation](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments) states the following:\r\n\r\n\"\"\"\r\nlabel_names (List[str], optional) –\r\n\r\nThe list of keys in your dictionary of inputs that correspond to the labels.\r\n\r\nWill eventually default to [\"labels\"] except if the model used is one of the XxxForQuestionAnswering in which case it will default to [\"start_positions\", \"end_positions\"].\r\n\"\"\"\r\n\r\nIt's clearly indicated it needs to be a list of strings, that have to be the names of the keys for the labels in your input dictionary. I have no idea what your `MultiLabelBinarizer.classes_` contains since you did not share the code of this class.\r\n\r\nMore generally, please look at the [warning in the Trainer documentation](https://huggingface.co/transformers/main_classes/trainer.html#trainer) since you appear to be using the Trainer with a model that is not a model of the Transformers library.\r\n",
"Oh alright, I didn't see that warning. Thank you!\r\n\r\nThe `MultiLabelBinarizer` from scikit-learn transforms list of class/label strings into a matrix, where each row is a one-hot-encoded version of the label. `MultiLabelBinarizer.classes_` returns the list of all class/label names detected in the original class list, with same ordering as the one-hot-encoded version.\r\n\r\nIt sounds like I understood `TrainingArguments.label_names` correctly then, but that my usage of a custom model is messing up the behaviour somehow. Are there any tips/strategies to fix these strange behaviours? Should I just override `prediction_step` and try to fix how has_labels is being assigned?",
"The easiest way to have your custom model work with `Trainer` with no strange behavior is to subclass `PreTrainedModel` (for instance by copying the `XXXForSequenceClassification` and tweaking it to your needs). Otherwise, subclassing and overriding the `prediction_step` method is the most straightforward path.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I ran into exactly the same issue today.\r\nI was also thinking that the parameter `label_names` in `TrainingArguments` refers to `data[\"train\"].features[\"label\"].names`.\r\nThe error message `IndexError: tuple index out of range` was not helpful at all and I only found the problem by trial and error.\r\n\r\nActually, I was not able to find the description for `label_names` in the [documentation](https://huggingface.co/docs/transformers/v4.14.1/en/main_classes/trainer#transformers.TrainingArguments) but only in the linked source code.\r\n\r\nBesides, I don't even understand what \"The list of keys in your dictionary of inputs that correspond to the labels.\" should mean.\r\n\r\nWhat \"dictionary of inputs\" and what \"list of keys\"?\r\n\r\nMy dataset looks like this\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 9245\r\n })\r\n test: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 1028\r\n })\r\n})\r\n```\r\nThe only dictionaries I see is `DatasetDict` with keys \"train\" and \"test\" and each `Dataset` with keys \"features\" and \"num_rows\".\r\n\r\nIt would be really helpful if the description of the parameter `label_names` and the error message could be improved.",
"YES I completely agree. This was very confusing in the documentation. I also interpreted it to mean the list of keys in my label2id dictionary, but it turns out that all it wanted for `label_names` in my case was `['labels']`. That is the name of the column in my input that holds the labels. I hope this helps anyone who was still struggling to understand what \"a list of the names of the keys for the labels in your input dictionary\" means :)"
] | 1,616 | 1,678 | 1,620 | NONE | null | ### Original Issue Title: Possible typo in trainer.py: prediction_step(), forgetting to exclude loss item of outputs dict when assigning logits
_**Update**: I determined the root cause of my error to stem from an incorrect assignment of `TrainingArgument.label_names`. **There is not a typo** in `Trainer.prediction_step()`, as I've suggested below. However there is still an issue: see my comment for elaboration._
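In short, `label_names` refers to the keys of the input batch that hold the targets, not to human-readable class names. A minimal sketch of the intended usage (the example class names are made up):
```
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",
    label_names=["labels"],  # keys of the batch dict, i.e. batch["labels"]
)

# Human-readable class names belong in the model config instead, e.g.:
# model.config.id2label = {0: "negative", 1: "neutral", 2: "positive"}
```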
I was using the `Trainer` trying to fine-tune [KB-Bert-Base-Swedish-Cased](https://huggingface.co/KB/bert-base-swedish-cased) for multi-class SequenceClassification, when I got an `IndexError: tuple index out of range` during the evaluation stage (I set up `Trainer` to evaluate after each epoch).
I started PDB and paused at this line in the evaluation phase:
https://github.com/huggingface/transformers/blob/6bc89ed9295443e5a3ee236ad544101752563917/src/transformers/trainer.py#L1805
With the debugger, I saw that `loss=None`, `labels=None`, and that `logits` is actually a `tuple` with two items. The first item is the prediction loss, and the second element is the actual output logits from the model's forward pass.
I think this strange assignment of the local `logits` variable is coming from here, inside `prediction_step`:
https://github.com/huggingface/transformers/blob/6bc89ed9295443e5a3ee236ad544101752563917/src/transformers/trainer.py#L1933
As the `outputs` dict includes the loss, and "loss" is not in ignore_keys, the loss value in outputs gets baked into `logits`.
I'm pretty sure it's a typo; comparing it to a few lines above (which is executed when has_labels=True), the similar line is:
https://github.com/huggingface/transformers/blob/6bc89ed9295443e5a3ee236ad544101752563917/src/transformers/trainer.py#L1922
The above links are all from Version 4.4.2, but this possible typo is still present in master:
https://github.com/huggingface/transformers/blob/9856c9213dfe9f8355fe00dd6cd0fa1ceae4fa5a/src/transformers/trainer.py#L1966
I haven't been able to read and grasp the code too much, but it looks to me like either we're forgetting to ignore the "loss" key in outputs, or the return statement of `prediction_step` should somehow be unpacking the logits tuple, so the two variables in the "logits" tuple are unpacked into `loss` and `logits`:
https://github.com/huggingface/transformers/blob/6bc89ed9295443e5a3ee236ad544101752563917/src/transformers/trainer.py#L1947
**For clarity, this is the stacktrace of how I encounter the tuple index error from the above typo:**
In the evaluation phase, `prediction_loop` runs over all the batches in my dev dataset. It gets the model output/prediction of each dev batch here:
https://github.com/huggingface/transformers/blob/6bc89ed9295443e5a3ee236ad544101752563917/src/transformers/trainer.py#L1805
Later in `prediction_loop`, we concatenate each prediction batch with the previous predictions here, calling the function `nested_concat`:
https://github.com/huggingface/transformers/blob/6bc89ed9295443e5a3ee236ad544101752563917/src/transformers/trainer.py#L1810
Inside `nested_concat`, in the line below, `new_tensors` is the above mentioned "logits" tuple.
https://github.com/huggingface/transformers/blob/6bc89ed9295443e5a3ee236ad544101752563917/src/transformers/trainer_pt_utils.py#L95
The above line does a recursive call to `nested_concat`, and we arrive in the line below.
https://github.com/huggingface/transformers/blob/6bc89ed9295443e5a3ee236ad544101752563917/src/transformers/trainer_pt_utils.py#L97
Which calls this:
https://github.com/huggingface/transformers/blob/6bc89ed9295443e5a3ee236ad544101752563917/src/transformers/trainer_pt_utils.py#L58
And I get an index error, as it's trying to index into what is actually the `loss` tensor. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10908/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10907 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10907/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10907/comments | https://api.github.com/repos/huggingface/transformers/issues/10907/events | https://github.com/huggingface/transformers/issues/10907 | 841,204,042 | MDU6SXNzdWU4NDEyMDQwNDI= | 10,907 | Exception: cannot import name 'Regex' from 'tokenizers' | {
"login": "xiliuhk",
"id": 1370123,
"node_id": "MDQ6VXNlcjEzNzAxMjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1370123?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiliuhk",
"html_url": "https://github.com/xiliuhk",
"followers_url": "https://api.github.com/users/xiliuhk/followers",
"following_url": "https://api.github.com/users/xiliuhk/following{/other_user}",
"gists_url": "https://api.github.com/users/xiliuhk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiliuhk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiliuhk/subscriptions",
"organizations_url": "https://api.github.com/users/xiliuhk/orgs",
"repos_url": "https://api.github.com/users/xiliuhk/repos",
"events_url": "https://api.github.com/users/xiliuhk/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiliuhk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"same problem.\r\n\r\nEdit: I updated the package \"tokenizers\" to the latest version and it works fine.",
"Hi! Do you have a colab so we can reproduce the issue? Or some commands we can run to obtain the same environment you have and test it out?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,620 | 1,620 | NONE | null | From: site-packages/transformers/convert_slow_tokenizer.py
When calling this for the first time:
from transformers import XLMRobertaTokenizer
tokenizers-0.10.1 transformers-4.4.2 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10907/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10906 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10906/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10906/comments | https://api.github.com/repos/huggingface/transformers/issues/10906/events | https://github.com/huggingface/transformers/pull/10906 | 841,123,665 | MDExOlB1bGxSZXF1ZXN0NjAwOTAzODg2 | 10,906 | Return global attentions (see #7514) | {
"login": "gui11aume",
"id": 1017195,
"node_id": "MDQ6VXNlcjEwMTcxOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1017195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gui11aume",
"html_url": "https://github.com/gui11aume",
"followers_url": "https://api.github.com/users/gui11aume/followers",
"following_url": "https://api.github.com/users/gui11aume/following{/other_user}",
"gists_url": "https://api.github.com/users/gui11aume/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gui11aume/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gui11aume/subscriptions",
"organizations_url": "https://api.github.com/users/gui11aume/orgs",
"repos_url": "https://api.github.com/users/gui11aume/repos",
"events_url": "https://api.github.com/users/gui11aume/events{/privacy}",
"received_events_url": "https://api.github.com/users/gui11aume/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,617 | 1,617 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #7514 (see discussion of March 22, 2021 with @patrickvonplaten)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10906/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10906",
"html_url": "https://github.com/huggingface/transformers/pull/10906",
"diff_url": "https://github.com/huggingface/transformers/pull/10906.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10906.patch",
"merged_at": 1617019223000
} |
https://api.github.com/repos/huggingface/transformers/issues/10905 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10905/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10905/comments | https://api.github.com/repos/huggingface/transformers/issues/10905/events | https://github.com/huggingface/transformers/pull/10905 | 841,102,619 | MDExOlB1bGxSZXF1ZXN0NjAwODg2MTU5 | 10,905 | Add ImageFeatureExtractionMixin | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | COLLABORATOR | null | # What does this PR do?
This PR adds a new `ImageFeatureExtractionMixin` to implement the common functionality needed for images (conversion to PIL Image / NumPy array /normalize/ resize) in a framework agnostic way. While it only adds support for torch (not tf) tensors as input, support for TF tensors is easy to add in the design and will be done when we have a TF model with a vision modality.
Along the way, this PR adds a new `is_vision_available` check (depends only on PIL for now, but we can add other dependencies later on if we feel we need them. It could for instance check for torchvision when torch is installed) and the "dummy" vision objects.
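A rough sketch of how the mixin's helpers are meant to compose; the module path and exact argument names here are assumptions based on the description above:
```
from PIL import Image
from transformers.image_utils import ImageFeatureExtractionMixin

helper = ImageFeatureExtractionMixin()
image = Image.open("example.jpg")  # hypothetical input file
image = helper.resize(image, size=224)
array = helper.to_numpy_array(image)  # framework-agnostic NumPy array
array = helper.normalize(array, mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
```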
I will work on adding tests tomorrow, but the general design can already be reviewed to check if it has everything needed.
cc @NielsRogge | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10905/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10905",
"html_url": "https://github.com/huggingface/transformers/pull/10905",
"diff_url": "https://github.com/huggingface/transformers/pull/10905.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10905.patch",
"merged_at": 1616772237000
} |
https://api.github.com/repos/huggingface/transformers/issues/10904 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10904/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10904/comments | https://api.github.com/repos/huggingface/transformers/issues/10904/events | https://github.com/huggingface/transformers/pull/10904 | 841,077,094 | MDExOlB1bGxSZXF1ZXN0NjAwODY0ODE0 | 10,904 | ONNX export: move sample input to same device as model when inferring shapes | {
"login": "severinsimmler",
"id": 16133277,
"node_id": "MDQ6VXNlcjE2MTMzMjc3",
"avatar_url": "https://avatars.githubusercontent.com/u/16133277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severinsimmler",
"html_url": "https://github.com/severinsimmler",
"followers_url": "https://api.github.com/users/severinsimmler/followers",
"following_url": "https://api.github.com/users/severinsimmler/following{/other_user}",
"gists_url": "https://api.github.com/users/severinsimmler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severinsimmler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severinsimmler/subscriptions",
"organizations_url": "https://api.github.com/users/severinsimmler/orgs",
"repos_url": "https://api.github.com/users/severinsimmler/repos",
"events_url": "https://api.github.com/users/severinsimmler/events{/privacy}",
"received_events_url": "https://api.github.com/users/severinsimmler/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | CONTRIBUTOR | null | # What does this PR do?
Training a model on a GPU and exporting it afterwards to ONNX raised a `RuntimeError`, because the model and the [sample input](https://github.com/huggingface/transformers/blob/1a3e0c4fe6868b4eb1105dfe601a79d7e5d11a0f/src/transformers/convert_graph_to_onnx.py#L196) in [`transformers.convert_graph_to_onnx.infer_shapes()`](https://github.com/huggingface/transformers/blob/1a3e0c4fe6868b4eb1105dfe601a79d7e5d11a0f/src/transformers/convert_graph_to_onnx.py#L161-L222) were not on the same device. This PR moves the sample input to the same device as the model.
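The gist of the fix, sketched with illustrative variable names rather than the exact code of `infer_shapes()`:
```
# Move the dummy inputs onto the model's device before the forward pass used
# for shape inference.
device = next(model.parameters()).device
tokens = {name: tensor.to(device) for name, tensor in tokens.items()}
outputs = model(**tokens)
```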
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests) Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@mfuntowicz (according to `git blame`)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10904/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10904",
"html_url": "https://github.com/huggingface/transformers/pull/10904",
"diff_url": "https://github.com/huggingface/transformers/pull/10904.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10904.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10903 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10903/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10903/comments | https://api.github.com/repos/huggingface/transformers/issues/10903/events | https://github.com/huggingface/transformers/pull/10903 | 841,046,562 | MDExOlB1bGxSZXF1ZXN0NjAwODM5MDgw | 10,903 | Add 3D attention mask to T5 model (#9643) | {
"login": "lexhuismans",
"id": 43178421,
"node_id": "MDQ6VXNlcjQzMTc4NDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/43178421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lexhuismans",
"html_url": "https://github.com/lexhuismans",
"followers_url": "https://api.github.com/users/lexhuismans/followers",
"following_url": "https://api.github.com/users/lexhuismans/following{/other_user}",
"gists_url": "https://api.github.com/users/lexhuismans/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lexhuismans/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lexhuismans/subscriptions",
"organizations_url": "https://api.github.com/users/lexhuismans/orgs",
"repos_url": "https://api.github.com/users/lexhuismans/repos",
"events_url": "https://api.github.com/users/lexhuismans/events{/privacy}",
"received_events_url": "https://api.github.com/users/lexhuismans/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @lexhuismans,\r\n\r\nThanks a lot for your PR! Could you also add a test to verify that T5 can be used with a 3D mask?\r\n",
"Hey @patrickvonplaten,\r\n\r\nThanks for your message. I had a shot in adding a test for the 3D attention mask. The test passed on my device. \r\n\r\nI based the test on a similar test for a default attention mask in test_modeling_bert.py. (Not sure if bert already tests for 3D attention mask?) \r\n\r\nAlso, I did a rebase before pushing which is why there are so many other commits in-between. \r\n\r\nLet me know if something is still missing or incorrect so I can have a look at it. ",
"I made a new PR with just the two commits #11197. "
] | 1,616 | 1,618 | 1,618 | CONTRIBUTOR | null | # What does this PR do?
It allows for a 3D attention mask in the T5 model (modeling_t5.py).
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #9643
This is a solution for allowing the 3D attention mask in the T5 model by making it broadcastable. It is based on what is used in BERT.
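The broadcasting trick, sketched in isolation and mirroring what BERT's `get_extended_attention_mask` does:
```
import torch

attention_mask = torch.ones(2, 5, 5)  # hypothetical [batch, from_seq, to_seq] mask
if attention_mask.dim() == 3:
    # Insert a head dimension so the mask broadcasts against attention scores
    # of shape [batch, num_heads, from_seq, to_seq].
    extended_attention_mask = attention_mask[:, None, :, :]
else:
    # The usual 2D [batch, seq] case.
    extended_attention_mask = attention_mask[:, None, None, :]
extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
```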
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10903/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10903",
"html_url": "https://github.com/huggingface/transformers/pull/10903",
"diff_url": "https://github.com/huggingface/transformers/pull/10903.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10903.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10902 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10902/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10902/comments | https://api.github.com/repos/huggingface/transformers/issues/10902/events | https://github.com/huggingface/transformers/pull/10902 | 841,024,243 | MDExOlB1bGxSZXF1ZXN0NjAwODIwMjg0 | 10,902 | Add `examples/run_ner_no_trainer.py` | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ok, added the missing part in Accelerate. In your code, before gathering the labels and predictions, you should pad them across the processes running if `pad_to_max_length` is False:\r\n```python\r\nif not args.pad_to_max_length:\r\n predictions = accelerator.pad_across_processes(predictions, dim=1, pad_index=-100)\r\n labels = accelerator.pad_across_processes(batch[\"labels\"], dim=1, pad_index=-100)\r\n```\r\n\r\nThis should solve the issue when `pad_to_max_length` is left at `False`!",
"Thanks for your comment, @sgugger. I fix the bugs and add documentation to README.",
"Thank you for spotting the mistake with unintentionally deleted `--label_all_tokens` from argarser. I added that argument back.",
"Just checked on TPU for completeness and it runs perfectly fine there, so we're all good!\r\n\r\nThanks for your contribution!!!"
] | 1,616 | 1,617 | 1,617 | CONTRIBUTOR | null | This PR adds an example of token classification tasks (`"ner", "pos", "chunk"`) to show the functionalities of the new `accelerate` library.
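For context, the core `accelerate` pattern such a no-trainer script is organized around looks roughly like this; the sketch assumes `model`, `optimizer` and `train_dataloader` are already built:
```
from accelerate import Accelerator

accelerator = Accelerator()
model, optimizer, train_dataloader = accelerator.prepare(model, optimizer, train_dataloader)

for batch in train_dataloader:
    outputs = model(**batch)
    accelerator.backward(outputs.loss)  # replaces loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```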
<hr>
**Reviewers:** @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10902/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10902/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10902",
"html_url": "https://github.com/huggingface/transformers/pull/10902",
"diff_url": "https://github.com/huggingface/transformers/pull/10902.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10902.patch",
"merged_at": 1617045083000
} |
https://api.github.com/repos/huggingface/transformers/issues/10901 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10901/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10901/comments | https://api.github.com/repos/huggingface/transformers/issues/10901/events | https://github.com/huggingface/transformers/issues/10901 | 840,959,563 | MDU6SXNzdWU4NDA5NTk1NjM= | 10,901 | Error with detecting cached files when running without Internet connection (related to #10067) | {
"login": "aosokin",
"id": 2099291,
"node_id": "MDQ6VXNlcjIwOTkyOTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2099291?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aosokin",
"html_url": "https://github.com/aosokin",
"followers_url": "https://api.github.com/users/aosokin/followers",
"following_url": "https://api.github.com/users/aosokin/following{/other_user}",
"gists_url": "https://api.github.com/users/aosokin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aosokin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aosokin/subscriptions",
"organizations_url": "https://api.github.com/users/aosokin/orgs",
"repos_url": "https://api.github.com/users/aosokin/repos",
"events_url": "https://api.github.com/users/aosokin/events{/privacy}",
"received_events_url": "https://api.github.com/users/aosokin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Related issue: https://github.com/huggingface/transformers/issues/9147, with proposed fix in https://github.com/huggingface/transformers/pull/9807",
"Why do we need this condition?\r\nhttps://github.com/huggingface/transformers/blob/1a3e0c4fe6868b4eb1105dfe601a79d7e5d11a0f/src/transformers/tokenization_utils_base.py#L1669-L1672\r\nWas introduced here: https://github.com/huggingface/transformers/commit/863e553f75daeaf09aea9cd521ac3a3b3f09e29f Is it needed in any other scenario?\r\nWould it be better to do `unresolved_files.append(file_id)` unconditionally?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.5.0.dev0
- Platform: Linux-3.10.0-957.5.1.el7.x86_64-x86_64-with-centos-7.6.1810-Core
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik (related to #10235 and #10067)
## Information
I'm trying to run
```
from transformers import BertTokenizer
BertTokenizer.from_pretrained("bert-large-uncased-whole-word-masking")
```
from an environment without Internet access. It crashes even though I have all files downloaded and cached. The uncaught exception:
https://github.com/huggingface/transformers/blob/5f1491d3b366d19cc08832d09bcfe007a2643089/src/transformers/file_utils.py#L1347-L1350
When `file_id == 'added_tokens_file'`, `file_path` equals https://huggingface.co/bert-large-uncased-whole-word-masking/resolve/main/added_tokens.json, which does not exist (https://github.com/huggingface/transformers/blob/1a3e0c4fe6868b4eb1105dfe601a79d7e5d11a0f/src/transformers/tokenization_utils_base.py#L1653).
This results in line
https://github.com/huggingface/transformers/blob/1a3e0c4fe6868b4eb1105dfe601a79d7e5d11a0f/src/transformers/file_utils.py#L1294 throwing `ConnectTimeout` which is caught in https://github.com/huggingface/transformers/blob/1a3e0c4fe6868b4eb1105dfe601a79d7e5d11a0f/src/transformers/file_utils.py#L1313
and further ignored until another exception in https://github.com/huggingface/transformers/blob/1a3e0c4fe6868b4eb1105dfe601a79d7e5d11a0f/src/transformers/tokenization_utils_base.py#L1672
which is not caught anywhere.
When trying to get the same file with the internet on, the code works differently: line https://github.com/huggingface/transformers/blob/1a3e0c4fe6868b4eb1105dfe601a79d7e5d11a0f/src/transformers/file_utils.py#L1295 throws `requests.exceptions.HTTPError`, which is caught and processed here https://github.com/huggingface/transformers/blob/1a3e0c4fe6868b4eb1105dfe601a79d7e5d11a0f/src/transformers/tokenization_utils_base.py#L1674-L1677
The rest of the code works just fine after `resolved_vocab_files[file_id] = None`
Using `BertTokenizer.from_pretrained(bert_version, local_files_only=True)` works just fine because of this condition:
https://github.com/huggingface/transformers/blob/1a3e0c4fe6868b4eb1105dfe601a79d7e5d11a0f/src/transformers/tokenization_utils_base.py#L1668-L1672
The current workaround is to use `BertTokenizer.from_pretrained(bert_version, local_files_only=True)`, but this does not allow using the same code with and without Internet.
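A sketch of that workaround, falling back to the local cache only when the online path fails (the exact exception types to catch are assumptions):
```
from requests.exceptions import ConnectionError, ConnectTimeout
from transformers import BertTokenizer

bert_version = "bert-large-uncased-whole-word-masking"
try:
    tokenizer = BertTokenizer.from_pretrained(bert_version)
except (ConnectionError, ConnectTimeout, ValueError):
    tokenizer = BertTokenizer.from_pretrained(bert_version, local_files_only=True)
```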
## To reproduce
Steps to reproduce the behavior:
Run
```
from transformers import BertTokenizer
BertTokenizer.from_pretrained("bert-large-uncased-whole-word-masking")
```
from an environment without internet access but with all the required cache files pre-downloaded.
## Expected behavior
Works exactly as
```
from transformers import BertTokenizer
BertTokenizer.from_pretrained("bert-large-uncased-whole-word-masking", local_files_only=True)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10901/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10900 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10900/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10900/comments | https://api.github.com/repos/huggingface/transformers/issues/10900/events | https://github.com/huggingface/transformers/issues/10900 | 840,817,633 | MDU6SXNzdWU4NDA4MTc2MzM= | 10,900 | Getting a model to work on a system with no internet access | {
"login": "XapaJIaMnu",
"id": 2027221,
"node_id": "MDQ6VXNlcjIwMjcyMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2027221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XapaJIaMnu",
"html_url": "https://github.com/XapaJIaMnu",
"followers_url": "https://api.github.com/users/XapaJIaMnu/followers",
"following_url": "https://api.github.com/users/XapaJIaMnu/following{/other_user}",
"gists_url": "https://api.github.com/users/XapaJIaMnu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XapaJIaMnu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XapaJIaMnu/subscriptions",
"organizations_url": "https://api.github.com/users/XapaJIaMnu/orgs",
"repos_url": "https://api.github.com/users/XapaJIaMnu/repos",
"events_url": "https://api.github.com/users/XapaJIaMnu/events{/privacy}",
"received_events_url": "https://api.github.com/users/XapaJIaMnu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, if you place all files which you find on the model page on the hub in a directory, then it will work.",
"You can even just `git clone` the model repo",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | NONE | null | ## Environment info
We are trying to use the model from https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment on a system that has no connection to the internet. Normally one can do:
`pipeline = Pipeline(model='model_name')`
and huggingface will fetch everything it needs from the internet. Unfortunately, some datasets reside within a highly protected environment that does not allow for any internet connection and we can't fetch our models. (Hell, we can't even copy/paste errors).
Uploading files to that environment is really cumbersome and every file needs to go through a review process. Through trial and error, I have gotten the model and tokenizer to load, but it is now missing a "vocabulary". Before I go and submit a request for extra files to be uploaded, could someone confirm which files I need so that the model can be loaded offline? As far as I understand, I have to have every single file from here: https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment/tree/main
and my script should look like:
`pipeline = Pipeline(model='/path/to/dir')`
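For what it's worth, here is a minimal sketch of what I expect the offline usage to look like (the directory path is made up; I'm assuming every file from the model page has been copied into it):
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

local_dir = "/path/to/twitter-roberta-base-sentiment"  # hypothetical local copy of the repo files
tokenizer = AutoTokenizer.from_pretrained(local_dir, local_files_only=True)
model = AutoModelForSequenceClassification.from_pretrained(local_dir, local_files_only=True)

classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
print(classifier("Good morning!"))
```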
### Who can help
Models:
- pipelines: @LysandreJik
## Information
https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment
The problem arises when using:
Running in a restricted environment
## To reproduce
Use the example script from https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment and try to manually input the files
## Expected behavior
The model should work just as if it were in a system connected to the internet.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10900/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10900/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10899 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10899/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10899/comments | https://api.github.com/repos/huggingface/transformers/issues/10899/events | https://github.com/huggingface/transformers/pull/10899 | 840,759,798 | MDExOlB1bGxSZXF1ZXN0NjAwNTk2MTQz | 10,899 | updates sagemaker documentation | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | MEMBER | null | # What does this PR do?
Extends the **local environment** configuration and adds an import to make it clearer. Also replaced two links.
"url": "https://api.github.com/repos/huggingface/transformers/issues/10899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10899/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10899",
"html_url": "https://github.com/huggingface/transformers/pull/10899",
"diff_url": "https://github.com/huggingface/transformers/pull/10899.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10899.patch",
"merged_at": 1616677291000
} |
https://api.github.com/repos/huggingface/transformers/issues/10898 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10898/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10898/comments | https://api.github.com/repos/huggingface/transformers/issues/10898/events | https://github.com/huggingface/transformers/pull/10898 | 840,650,991 | MDExOlB1bGxSZXF1ZXN0NjAwNTA2Nzk1 | 10,898 | run_glue_no_trainer: datasets -> raw_datasets | {
"login": "jethrokuan",
"id": 1667473,
"node_id": "MDQ6VXNlcjE2Njc0NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1667473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jethrokuan",
"html_url": "https://github.com/jethrokuan",
"followers_url": "https://api.github.com/users/jethrokuan/followers",
"following_url": "https://api.github.com/users/jethrokuan/following{/other_user}",
"gists_url": "https://api.github.com/users/jethrokuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jethrokuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jethrokuan/subscriptions",
"organizations_url": "https://api.github.com/users/jethrokuan/orgs",
"repos_url": "https://api.github.com/users/jethrokuan/repos",
"events_url": "https://api.github.com/users/jethrokuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/jethrokuan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Creating a quick PR now to alert to some issues I'm encountering, will backfill with proper GitHub issues after I knock off from work.\r\n\r\nOther issues:\r\n- [ ] If task_name is `None`, `metrics` is undefined, and will throw `... metrics used before assignment`"
] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | # What does this PR do?
Use the correct variable (raw_datasets) instead of the module (datasets)
where appropriate. The script will otherwise fail with "module object is not subscriptable".
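A minimal illustration of the failure mode being fixed (the dataset name is only an example):
```
import datasets
from datasets import load_dataset

raw_datasets = load_dataset("glue", "mrpc")
# datasets["train"]                 # wrong: `datasets` is the module -> "'module' object is not subscriptable"
train_dataset = raw_datasets["train"]  # correct: index the loaded DatasetDict
```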
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10898/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10898",
"html_url": "https://github.com/huggingface/transformers/pull/10898",
"diff_url": "https://github.com/huggingface/transformers/pull/10898.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10898.patch",
"merged_at": 1616675297000
} |
https://api.github.com/repos/huggingface/transformers/issues/10897 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10897/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10897/comments | https://api.github.com/repos/huggingface/transformers/issues/10897/events | https://github.com/huggingface/transformers/issues/10897 | 840,627,159 | MDU6SXNzdWU4NDA2MjcxNTk= | 10,897 | [doc] Custom datasets page reference dataset library as NLP library | {
"login": "tomy0000000",
"id": 23290356,
"node_id": "MDQ6VXNlcjIzMjkwMzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/23290356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomy0000000",
"html_url": "https://github.com/tomy0000000",
"followers_url": "https://api.github.com/users/tomy0000000/followers",
"following_url": "https://api.github.com/users/tomy0000000/following{/other_user}",
"gists_url": "https://api.github.com/users/tomy0000000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomy0000000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomy0000000/subscriptions",
"organizations_url": "https://api.github.com/users/tomy0000000/orgs",
"repos_url": "https://api.github.com/users/tomy0000000/repos",
"events_url": "https://api.github.com/users/tomy0000000/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomy0000000/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, the name should indeed be updated! If you want to do a PR with this, please go ahead!"
] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | ### Who can help
@sgugger
## Information
The page [Fine-tuning with custom datasets](https://huggingface.co/transformers/custom_datasets.html) references the datasets library a lot, but under its old name (the NLP library), and I've noticed that this still holds true in the [source .rst](https://github.com/huggingface/transformers/blob/master/docs/source/custom_datasets.rst) file.
If it wasn't left unmodified intentionally, I'm willing to help by submitting a PR; just filing the issue to confirm.
"url": "https://api.github.com/repos/huggingface/transformers/issues/10897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10897/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10896 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10896/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10896/comments | https://api.github.com/repos/huggingface/transformers/issues/10896/events | https://github.com/huggingface/transformers/issues/10896 | 840,596,156 | MDU6SXNzdWU4NDA1OTYxNTY= | 10,896 | save only the best performing checkpoint | {
"login": "yuvalkirstain",
"id": 57996478,
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuvalkirstain",
"html_url": "https://github.com/yuvalkirstain",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"The checkpoints are saved to resume training in case you are interrupted, so saving only the best checkpoints wouldn't work with this.\r\n\r\nThe `load_best_model_at_end` functionality already keeps track of the best checkpoint during training and reloads it at the end, I think it should cover what you need.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"So,when I finish training,how can I load the best performing checkpoint ? @sgugger ",
" When I check the `trainer_state.json` file I found these mssage:\r\n```\r\n \"best_metric\": null,\r\n \"best_model_checkpoint\": null,\r\n \"epoch\": 100.0,\r\n \"global_step\": 559300,\r\n \"is_hyper_param_search\": false,\r\n \"is_local_process_zero\": true,\r\n \"is_world_process_zero\": true,\r\n```\r\n\r\nas shown above,\"best_model_checkpoint\" is null.\r\n\r\n",
"If you did not specify `--load_best_model_at_end` for your script, you won't get it automatically."
] | 1,616 | 1,622 | 1,619 | CONTRIBUTOR | null | # 🚀 Feature request
In the Trainer, enable an option to save only the best performing checkpoints (rather than the newest).
## Motivation
Usually when we train a model we would like to keep only the best performing checkpoints (on the dev set according to the specified metric) rather than the newest checkpoints.
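For context, a hedged sketch of what is already possible today with `load_best_model_at_end` (the argument values below are only examples):
```
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",     # evaluate periodically so the "best" checkpoint can be tracked
    eval_steps=500,
    save_steps=500,
    save_total_limit=2,              # cap how many checkpoints are kept on disk
    load_best_model_at_end=True,     # reload the best checkpoint when training finishes
    metric_for_best_model="eval_loss",
)
```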
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10896/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10895 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10895/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10895/comments | https://api.github.com/repos/huggingface/transformers/issues/10895/events | https://github.com/huggingface/transformers/pull/10895 | 840,510,144 | MDExOlB1bGxSZXF1ZXN0NjAwMzg3MDI3 | 10,895 | Add missing global_attentions into the return_dict of Longformer models | {
"login": "joe32140",
"id": 6942982,
"node_id": "MDQ6VXNlcjY5NDI5ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6942982?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joe32140",
"html_url": "https://github.com/joe32140",
"followers_url": "https://api.github.com/users/joe32140/followers",
"following_url": "https://api.github.com/users/joe32140/following{/other_user}",
"gists_url": "https://api.github.com/users/joe32140/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joe32140/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joe32140/subscriptions",
"organizations_url": "https://api.github.com/users/joe32140/orgs",
"repos_url": "https://api.github.com/users/joe32140/repos",
"events_url": "https://api.github.com/users/joe32140/events{/privacy}",
"received_events_url": "https://api.github.com/users/joe32140/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @joe32140,\r\n\r\nSorry, I think this PR already fixes the problem: https://github.com/huggingface/transformers/pull/10906/files"
] | 1,616 | 1,617 | 1,617 | NONE | null | The `global_attentions` is missing in the return_dict of `LongformerForSequenceClassification`, `LongformerForMaskedLM`, and `LongformerForTokenClassification` classes in `modeling_longformer.py`.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10895/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10895",
"html_url": "https://github.com/huggingface/transformers/pull/10895",
"diff_url": "https://github.com/huggingface/transformers/pull/10895.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10895.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10894 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10894/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10894/comments | https://api.github.com/repos/huggingface/transformers/issues/10894/events | https://github.com/huggingface/transformers/issues/10894 | 840,459,883 | MDU6SXNzdWU4NDA0NTk4ODM= | 10,894 | Invalid argument: Incompatible shapes: [24,1536,12,514] vs. [24,1536,12,513] | {
"login": "user06039",
"id": 58213113,
"node_id": "MDQ6VXNlcjU4MjEzMTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/58213113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/user06039",
"html_url": "https://github.com/user06039",
"followers_url": "https://api.github.com/users/user06039/followers",
"following_url": "https://api.github.com/users/user06039/following{/other_user}",
"gists_url": "https://api.github.com/users/user06039/gists{/gist_id}",
"starred_url": "https://api.github.com/users/user06039/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/user06039/subscriptions",
"organizations_url": "https://api.github.com/users/user06039/orgs",
"repos_url": "https://api.github.com/users/user06039/repos",
"events_url": "https://api.github.com/users/user06039/events{/privacy}",
"received_events_url": "https://api.github.com/users/user06039/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I have the same error.",
"Me too"
] | 1,616 | 1,678 | 1,619 | NONE | null | I had a 16 class classification dataset, but I am getting an error when using longformer, what am I doing wrong here?
```
from transformers import LongformerTokenizerFast, TFLongformerForSequenceClassification
import tensorflow as tf
import pickle
tokenizer = LongformerTokenizerFast.from_pretrained('allenai/longformer-base-4096')
model = TFLongformerForSequenceClassification.from_pretrained('allenai/longformer-base-4096', num_labels=16, gradient_checkpointing=True)
df = pd.read_csv("dataset.csv")
df1 = pd.read_csv("dataset1.csv")
y_train = pickle.load(open("y_train.pkl", "rb"))
y_test = pickle.load(open("y_test.pkl", "rb"))
x_train = tokenizer(df.posts.tolist(), max_length=1500, return_tensors="tf", padding="max_length", truncation=True)
x_test = tokenizer(df1.posts.tolist(), max_length=1500, return_tensors="tf", padding="max_length", truncation=True)
print(y_train.nunique()) # return 16
model.fit(x_train, y_train, batch_size=24,
steps_per_epoch=steps_per_epoch,
validation_data=(x_test, y_test))
```
Why do I get this shape mismatch error? What am I doing wrong?
```
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fd3d7fdf9a0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fd3d7fdf9a0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
WARNING:tensorflow:From /home/intellectfaces/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:5043: calling gather (from tensorflow.python.ops.array_ops) with validate_indices is deprecated and will be removed in a future version.
Instructions for updating:
The `validate_indices` argument has no effect. Indices are always validated on CPU and never validated on GPU.
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-17-c08295c7f1ca> in <module>
----> 1 model.fit(x_train, y_train, batch_size=24,
2 steps_per_epoch=steps_per_epoch,
3 validation_data=(x_test, y_test))
~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1155 _r=1):
1156 callbacks.on_train_batch_begin(step)
-> 1157 tmp_logs = self.train_function(iterator)
1158 if data_handler.should_sync:
1159 context.async_wait()
~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
865 tracing_count = self.experimental_get_tracing_count()
866 with trace.Trace(self._name) as tm:
--> 867 result = self._call(*args, **kwds)
868 compiler = "xla" if self._jit_compile else "nonXla"
869 new_tracing_count = self.experimental_get_tracing_count()
~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
926 # Lifting succeeded, so variables are initialized and we can run the
927 # stateless function.
--> 928 return self._stateless_fn(*args, **kwds)
929 else:
930 _, _, _, filtered_flat_args = \
~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs)
3016 (graph_function,
3017 filtered_flat_args) = self._maybe_define_function(args, kwargs)
-> 3018 return graph_function._call_flat(
3019 filtered_flat_args, captured_inputs=graph_function.captured_inputs) # pylint: disable=protected-access
3020
~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
1958 and executing_eagerly):
1959 # No tape is watching; skip to running the function.
-> 1960 return self._build_call_outputs(self._inference_function.call(
1961 ctx, args, cancellation_manager=cancellation_manager))
1962 forward_backward = self._select_forward_and_backward_functions(
~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py in call(self, ctx, args, cancellation_manager)
589 with _InterpolateFunctionError(self):
590 if cancellation_manager is None:
--> 591 outputs = execute.execute(
592 str(self.signature.name),
593 num_outputs=self._num_outputs,
~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
57 try:
58 ctx.ensure_initialized()
---> 59 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
60 inputs, attrs, num_outputs)
61 except core._NotOkStatusException as e:
InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: Incompatible shapes: [24,1536,12,514] vs. [24,1536,12,513]
[[node gradient_tape/tf_longformer_for_sequence_classification/longformer/encoder/layer_._0/attention/self/BroadcastGradientArgs_1 (defined at <ipython-input-17-c08295c7f1ca>:1) ]]
[[tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/cond_1/pivot_t/_985/_1717]]
(1) Invalid argument: Incompatible shapes: [24,1536,12,514] vs. [24,1536,12,513]
[[node gradient_tape/tf_longformer_for_sequence_classification/longformer/encoder/layer_._0/attention/self/BroadcastGradientArgs_1 (defined at <ipython-input-17-c08295c7f1ca>:1) ]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_95332]
Function call stack:
train_function -> train_function
```
### Environment info
transformers version: 4.3.3
Platform: Ubuntu 20.04 LTS
Python version: 3.8.x
PyTorch version (GPU?): 1.8.0+cu111
Tensorflow version (GPU?): 2.5.0-dev20210311
CUDA: cuda_11.1
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10894/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10893 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10893/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10893/comments | https://api.github.com/repos/huggingface/transformers/issues/10893/events | https://github.com/huggingface/transformers/issues/10893 | 840,357,637 | MDU6SXNzdWU4NDAzNTc2Mzc= | 10,893 | [trainer] large scale models support | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm not sure you are aware but the `Trainer` can take a `model_init` parameter that... well... creates the model ;-) Have you explored how it could help with this particular problem?\r\n\r\nThe changes in the other parts of the lib look reasonable to me at first glance.",
"Thanks for the very detailed summary @stas00! All of the changes you propose make sense. The changes to `from_pretrained` look inevitable, and the approach you propose looks like it does the job without being invasive in other parts of the library that we want to keep readable like the model files.\r\n\r\nI know the API isn't final and prone to changes, but could we imagine a flag like `deepspeed_aware_instantiation` or `deepspeed_partitioning` in the `from_pretrained` method, rather than a `deepspeed_is_zero3_enabled(True)`?\r\nI think this would be more in line with how we manage things in the library from the user's perspective (which is principally through kwargs). I know none of this is final, but thinking of the API beforehand doesn't sound like a bad idea before everything is implemented :)",
"> I'm not sure you are aware but the `Trainer` can take a `model_init` parameter that... well... creates the model\r\n\r\nI need trainer init to complete before `model_init` is called then - i.e. I need the fully initialized Trainer object inside `model_init`. \r\n\r\nthe model init will depends on how the training is expected to run, so I guess we need a sort of trainer - pre-init :)",
"> I know the API isn't final and prone to changes, but could we imagine a flag like `deepspeed_aware_instantiation` or `deepspeed_partitioning` in the `from_pretrained` method, rather than a `deepspeed_is_zero3_enabled(True)`?\r\n> I think this would be more in line with how we manage things in the library from the user's perspective (which is principally through kwargs). I know none of this is final, but thinking of the API beforehand doesn't sound like a bad idea before everything is implemented :)\r\n\r\nAbsolutely, this would be ideal. \r\n\r\nthe problem with this approach is that for example we will have to change all examples to support new arguments to `.from_pretrained`, that's why I am trying to make it work transparently.\r\n\r\nBut the examples will still have to call `deepspeed_is_zero3_enabled()` and just pass the result to `.from_pretrained`...\r\n\r\nBut we could support both - if the new argument is there we use it, if it's not there as a fallback we check the global state helper function. I'm trying to solve this on the global level and not just on the use-case where the user calls each function explicitly (which is not the case in examples, but of course we could change them too.)\r\n\r\nI'm totally not attached to the proposed way, we can choose what resonates the most.\r\n\r\nThank you for your feedback, @LysandreJik and @sgugger ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,621 | 1,621 | CONTRIBUTOR | null | As I am integrating DeepSpeed ZeRO-3 which can run on hundreds of gpus and train models with Trillion of params
https://github.com/huggingface/transformers/pull/10753 I see an emerging need to adjust how the trainer is used.
Currently the usage is:
```
model = T5ForConditionalGeneration.from_pretrained("t5-small")
trainer = Trainer(model=model, ....)
trainer.train()
```
The problem is that this implies the model can fit in the first node's general RAM, and that's not always the case. So, for example, in my PR I propose the following change:
```
from transformers.integrations import deepspeed_is_zero3_enabled
deepspeed_is_zero3_enabled(True)
model = T5ForConditionalGeneration.from_pretrained("t5-small")
```
and I change `from_pretrained` to not init the model right away on CPU and to load the pre-trained weights directly on all participating GPUs, which allows loading models that are bigger than one GPU. Since the PR hasn't been reviewed yet (I'm still working on it), the API may change, but what I'm trying to communicate here is that we need the DeepSpeed configuration before we create the model. This change is only needed for ZeRO-3, and at the moment I have no knowledge of that until the trainer is created (but I'm changing this).
While we can automagically discover whether we are running under ZeRO-3 when a user passes `--deepspeed ds_config.json` on the command line, I can't do this if a user isn't using the command line to launch the script.
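For illustration, a rough sketch of what such auto-discovery could look like when a config file path is available (the helper name is hypothetical and not part of the PR):
```
import json

def ds_config_enables_zero3(ds_config_path):
    # Peek into the DeepSpeed config and check whether ZeRO stage 3 is requested.
    with open(ds_config_path) as f:
        config = json.load(f)
    return config.get("zero_optimization", {}).get("stage") == 3
```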
In addition, in the Trainer we already have a ton of logic where we purposefully don't call `model.to(device)`, so it's another indication that the model placement needs special treatment.
So the paradigm shift that may have to happen is where we init the `Trainer` first, gather all the info we need about how the model will be used. Then we init the model and pass it to the existing Trainer object, then we train. So something like:
```
trainer = Trainer(...)
new_model_init_specific_args = trainer.model_init_specific_args()
model = T5ForConditionalGeneration.from_pretrained("t5-small", **new_model_init_specific_args)
trainer.model(model)
trainer.train()
```
Please let me know if the need makes sense.
I think I can manage the current PR with some hacks to avoid this, but eventually I think we will need to switch to something that I proposed here to move into the future where we support very large models.
Nothing that needs to be done right away, just sharing the emerging need.
Here is a bit of a preview of how I had to change `from_pretrained()`:
https://github.com/huggingface/transformers/blob/538a4026a1c6c477c1932b435dcce7cbacfc5898/src/transformers/modeling_utils.py#L1062-L1068
https://github.com/huggingface/transformers/blob/538a4026a1c6c477c1932b435dcce7cbacfc5898/src/transformers/modeling_utils.py#L1124-L1135
This allows loading the exact partition of the params for each GPU without ever loading it on CPU or a single GPU (well, state_dict loading is a problem at the moment as it still gets fully copied to CPU, but we will have to sort this out down the road).
In the following addition, we invade `generation_utils` because now we have to make all GPUs work in sync and can't stop running `forward` until all GPUs have finished generating their sequences.
https://github.com/huggingface/transformers/blob/538a4026a1c6c477c1932b435dcce7cbacfc5898/src/transformers/generation_utils.py#L1273-L1287
so that's another new concept, but this one is less of an issue with how the Trainer is run - just wanted to give a complete picture of the major needs. (And this particular code will change a bit thanks to @patrickvonplaten's commentary - just didn't get to do it yet)
Please also feel free to comment in the PR directly as that part of the code is pretty complete. I just made this issue separate to discuss the bigger need.
Thank you!
@sgugger, @LysandreJik, @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10893/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10892 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10892/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10892/comments | https://api.github.com/repos/huggingface/transformers/issues/10892/events | https://github.com/huggingface/transformers/issues/10892 | 840,219,002 | MDU6SXNzdWU4NDAyMTkwMDI= | 10,892 | ImportError: cannot import name 'BertLayerNorm' when upgrading to latest transformers | {
"login": "gsrivas4",
"id": 23170843,
"node_id": "MDQ6VXNlcjIzMTcwODQz",
"avatar_url": "https://avatars.githubusercontent.com/u/23170843?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gsrivas4",
"html_url": "https://github.com/gsrivas4",
"followers_url": "https://api.github.com/users/gsrivas4/followers",
"following_url": "https://api.github.com/users/gsrivas4/following{/other_user}",
"gists_url": "https://api.github.com/users/gsrivas4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gsrivas4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gsrivas4/subscriptions",
"organizations_url": "https://api.github.com/users/gsrivas4/orgs",
"repos_url": "https://api.github.com/users/gsrivas4/repos",
"events_url": "https://api.github.com/users/gsrivas4/events{/privacy}",
"received_events_url": "https://api.github.com/users/gsrivas4/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1897896961,
"node_id": "MDU6TGFiZWwxODk3ODk2OTYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Migration",
"name": "Migration",
"color": "e99695",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"There is no `BertLayerNorm` anymore since all it was adding has been ported to main PyTorch. The BERT model is now jusing `torch.nn.LayerNorm`.\r\n\r\nSo to make your code working, instead of trying to import it from transformers, just define it as:\r\n```\r\nBertLayerNorm = torch.nn.LayerNorm\r\n```",
"@sgugger your suggestion resolved the issue. Thanks!",
"Closing then :-)"
] | 1,616 | 1,617 | 1,617 | NONE | null | # 📚 Migration
## Getting error when upgrading from pytorch-transformers to transformers
<!-- Important information -->
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: Yes
* [ ] my own modified scripts: Yes
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: No
* [ ] my own task or dataset: No
## Details
I am using the Oscar repo (https://github.com/microsoft/Oscar), which uses an older version of Hugging Face pytorch-transformers (https://github.com/huggingface/transformers/tree/067923d3267325f525f4e46f357360c191ba562e). I am trying to upgrade the repo to use the latest version of transformers (https://github.com/huggingface/transformers). However, I am getting the following error when I try to use the latest version of transformers:
```
Traceback (most recent call last):
File "oscar/run_captioning.py", line 22, in <module>
from oscar.modeling.modeling_bert import BertForImageCaptioning
File "/home/default/ephemeral_drive/work/image_captioning/Oscar_edited/oscar/modeling/modeling_bert.py", line 15, in <module>
from transformers.models.bert.modeling_bert import BertLayerNorm
ImportError: cannot import name 'BertLayerNorm'
```
I have tried running an example script from the latest transformers repo - https://github.com/huggingface/transformers/blob/master/examples/research_projects/movement-pruning/emmental/modeling_bert_masked.py - which uses `BertLayerNorm`, but that gives the following error:
```
$ python emmental/modeling_bert_masked.py
Traceback (most recent call last):
File "emmental/modeling_bert_masked.py", line 29, in <module>
from emmental import MaskedBertConfig
File "/home/default/ephemeral_drive/work/image_captioning/Oscar_edited/transformers/examples/research_projects/movement-pruning/emmental/__init__.py", line 3, in <module>
from .modeling_bert_masked import (
File "/home/default/ephemeral_drive/work/image_captioning/Oscar_edited/transformers/examples/research_projects/movement-pruning/emmental/modeling_bert_masked.py", line 33, in <module>
from transformers.models.bert.modeling_bert import ACT2FN, BertLayerNorm, load_tf_weights_in_bert
ImportError: cannot import name 'BertLayerNorm'
```
I tried looking for the definition of `BertLayerNorm` in the current version of transformers, but it is not present there. The definition is present in the older version of transformers, here - https://github.com/huggingface/transformers/blob/067923d3267325f525f4e46f357360c191ba562e/pytorch_transformers/modeling_bert.py#L223-L240
How can I get `BertLayerNorm` in my project using the latest transformers?
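A minimal sketch of an alias that would keep the old code working (assuming plain `torch.nn.LayerNorm` is an acceptable replacement, since recent BERT code uses it directly):
```
import torch

# BertLayerNorm no longer exists because the model now uses torch.nn.LayerNorm directly;
# this alias lets older code (like Oscar's modeling_bert.py) keep working.
BertLayerNorm = torch.nn.LayerNorm
```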
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: https://github.com/huggingface/transformers
- Platform: x86_64 GNU/Linux
- Python version: 3.6.8
- PyTorch version (GPU?): 1.7.0+cu101 (GPU)
- Tensorflow version (GPU?): 2.3.0 (GPU)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: No
<!-- IMPORTANT: which version of the former library do you use? -->
* `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch): https://github.com/huggingface/transformers/tree/067923d3267325f525f4e46f357360c191ba562e
## Checklist
- [ Yes] I have read the migration guide in the readme.
([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers);
[pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [ yes] I checked if a related official extension example runs on my machine.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10892/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10891 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10891/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10891/comments | https://api.github.com/repos/huggingface/transformers/issues/10891/events | https://github.com/huggingface/transformers/pull/10891 | 840,126,146 | MDExOlB1bGxSZXF1ZXN0NjAwMDM3MDU1 | 10,891 | Update Training Arguments Documentation: ignore_skip_data -> ignore_data_skip | {
"login": "siddk",
"id": 2498509,
"node_id": "MDQ6VXNlcjI0OTg1MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2498509?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/siddk",
"html_url": "https://github.com/siddk",
"followers_url": "https://api.github.com/users/siddk/followers",
"following_url": "https://api.github.com/users/siddk/following{/other_user}",
"gists_url": "https://api.github.com/users/siddk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/siddk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/siddk/subscriptions",
"organizations_url": "https://api.github.com/users/siddk/orgs",
"repos_url": "https://api.github.com/users/siddk/repos",
"events_url": "https://api.github.com/users/siddk/events{/privacy}",
"received_events_url": "https://api.github.com/users/siddk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | Currently, docs/docstring for TrainingArguments refers to `ignore_skip_data` as the argument for skipping dataloader replay on resume. However, actual argument is called `ignored_data_skip` which leads to errors if you just go off the docs.
(Separate note, doing a full replay for long runs is pretty annoying --> thinking about a way to eliminate this/speed up considerably, but would love to here what the Transformers team is up to in this regard!).
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
Tagging @sgugger as this has to do with documentation. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10891/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10891",
"html_url": "https://github.com/huggingface/transformers/pull/10891",
"diff_url": "https://github.com/huggingface/transformers/pull/10891.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10891.patch",
"merged_at": 1616618691000
} |
https://api.github.com/repos/huggingface/transformers/issues/10890 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10890/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10890/comments | https://api.github.com/repos/huggingface/transformers/issues/10890/events | https://github.com/huggingface/transformers/pull/10890 | 840,061,982 | MDExOlB1bGxSZXF1ZXN0NTk5OTgwODUx | 10,890 | Remove version warning in pretrained BART models | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | COLLABORATOR | null | # What does this PR do?
This PR fixes the warnings when loading any pretrained BART model:
```
Some weights of the model checkpoint at facebook/bart-large-mnli were not used when initializing BartModelForSequenceClassification: ['model.encoder.version', 'model.decoder.version']
- This IS expected if you are initializing BartModelForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BartModelForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10890/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10890",
"html_url": "https://github.com/huggingface/transformers/pull/10890",
"diff_url": "https://github.com/huggingface/transformers/pull/10890.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10890.patch",
"merged_at": 1616613700000
} |
https://api.github.com/repos/huggingface/transformers/issues/10889 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10889/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10889/comments | https://api.github.com/repos/huggingface/transformers/issues/10889/events | https://github.com/huggingface/transformers/pull/10889 | 840,057,322 | MDExOlB1bGxSZXF1ZXN0NTk5OTc2NzI5 | 10,889 | Fix overflowing bad word ids | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | MEMBER | null | As of now, bad word IDs are not checked when added to the configuration/passed as inputs to the generate method.
This is an issue when an invalid bad word ID is defined: if the vocab size is 30k, then defining a bad word ID for `30001` crashes the generation function with the following error:
```
torch.sparse.LongTensor(banned_mask.t(), indices, scores.size()).to(scores.device).to_dense().bool()
RuntimeError: size is inconsistent with indices: for dim 1, size is 30000 but found index 30001
```
Please let me know if you think this should raise a better error instead, rather than a warning. | {
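Until then, a hedged user-side sketch of the kind of validation this warning covers (the ids and vocabulary size below are made up and would normally come from `model.config.vocab_size`):
```
bad_words_ids = [[30001]]
vocab_size = 30000  # would normally be model.config.vocab_size

for word_ids in bad_words_ids:
    for token_id in word_ids:
        if token_id >= vocab_size:
            raise ValueError(
                f"bad word id {token_id} is out of range for a vocabulary of size {vocab_size}"
            )
```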
"url": "https://api.github.com/repos/huggingface/transformers/issues/10889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10889/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10889",
"html_url": "https://github.com/huggingface/transformers/pull/10889",
"diff_url": "https://github.com/huggingface/transformers/pull/10889.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10889.patch",
"merged_at": 1616613236000
} |
https://api.github.com/repos/huggingface/transformers/issues/10888 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10888/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10888/comments | https://api.github.com/repos/huggingface/transformers/issues/10888/events | https://github.com/huggingface/transformers/pull/10888 | 840,049,189 | MDExOlB1bGxSZXF1ZXN0NTk5OTY5Nzcy | 10,888 | Instantiate model only once in pipeline | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Checked all the slow tests run. I have no idea on how to implement a test that checks the model is only loaded once, so I'm going merge this and if anyone wants to tackle that, it can be done in a separate PR."
] | 1,616 | 1,617 | 1,617 | COLLABORATOR | null | # What does this PR do?
The current implementation of `pipeline` is inefficient in the sense that it instantiates the model twice just to guess the proper framework. This PR does not add any breaking change but reworks the function that infers the framework from the model to:
1. instantiate the proper class of the model (this avoids getting weird warnings about missing weights)
2. return the model instantiated so it's not re-instantiated later on.
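As a rough sketch of the idea (simplified; the helper name, error handling, and class mapping below are assumptions, not the actual diff):

```python
# Hypothetical sketch: load the model once with the right class per framework and
# return both the framework string and the instantiated model for reuse.
def infer_framework_load_model(model_name, model_classes):
    # e.g. model_classes = {"pt": AutoModelForSequenceClassification, "tf": TFAutoModelForSequenceClassification}
    for framework, model_class in model_classes.items():
        try:
            model = model_class.from_pretrained(model_name)
            return framework, model  # reuse this instance instead of re-instantiating later
        except OSError:
            continue
    raise ValueError(f"Could not load {model_name} with any of the provided model classes.")
```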
cc @mfuntowicz and @Narsil | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10888/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10888",
"html_url": "https://github.com/huggingface/transformers/pull/10888",
"diff_url": "https://github.com/huggingface/transformers/pull/10888.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10888.patch",
"merged_at": 1617028754000
} |
https://api.github.com/repos/huggingface/transformers/issues/10887 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10887/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10887/comments | https://api.github.com/repos/huggingface/transformers/issues/10887/events | https://github.com/huggingface/transformers/issues/10887 | 840,005,521 | MDU6SXNzdWU4NDAwMDU1MjE= | 10,887 | Error Loading a Hub Model (Multilingual-MiniLM) | {
"login": "vyraun",
"id": 17217068,
"node_id": "MDQ6VXNlcjE3MjE3MDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/17217068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vyraun",
"html_url": "https://github.com/vyraun",
"followers_url": "https://api.github.com/users/vyraun/followers",
"following_url": "https://api.github.com/users/vyraun/following{/other_user}",
"gists_url": "https://api.github.com/users/vyraun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vyraun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vyraun/subscriptions",
"organizations_url": "https://api.github.com/users/vyraun/orgs",
"repos_url": "https://api.github.com/users/vyraun/repos",
"events_url": "https://api.github.com/users/vyraun/events{/privacy}",
"received_events_url": "https://api.github.com/users/vyraun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please note: This checkpoint uses BertModel with XLMRobertaTokenizer so AutoTokenizer won't work with this checkpoint!"
] | 1,616 | 1,616 | 1,616 | NONE | null | ## Code Snippet
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("microsoft/Multilingual-MiniLM-L12-H384")
model = AutoModel.from_pretrained("microsoft/Multilingual-MiniLM-L12-H384")
```
- `transformers` version: 4.1.1, 3.1.0 (error in both)
## Error
```
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
```
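As noted in the comments, this checkpoint pairs `BertModel` weights with an `XLMRobertaTokenizer`, so a workaround is to bypass the Auto classes. A minimal sketch of that workaround:

```python
from transformers import BertModel, XLMRobertaTokenizer

# Load the tokenizer and model classes explicitly instead of relying on AutoTokenizer/AutoModel
tokenizer = XLMRobertaTokenizer.from_pretrained("microsoft/Multilingual-MiniLM-L12-H384")
model = BertModel.from_pretrained("microsoft/Multilingual-MiniLM-L12-H384")
```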
## Expected behavior
The [model and tokenizer](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) load correctly. The error can be reproduced in a [colab notebook](https://colab.research.google.com/drive/1uFnBN-WdpK4PiamvdyizMCzJsrBtewKx?usp=sharing). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10887/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10886 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10886/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10886/comments | https://api.github.com/repos/huggingface/transformers/issues/10886/events | https://github.com/huggingface/transformers/pull/10886 | 839,883,567 | MDExOlB1bGxSZXF1ZXN0NTk5ODI2MDE0 | 10,886 | Fix comment in modeling_t5.py | {
"login": "lexhuismans",
"id": 43178421,
"node_id": "MDQ6VXNlcjQzMTc4NDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/43178421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lexhuismans",
"html_url": "https://github.com/lexhuismans",
"followers_url": "https://api.github.com/users/lexhuismans/followers",
"following_url": "https://api.github.com/users/lexhuismans/following{/other_user}",
"gists_url": "https://api.github.com/users/lexhuismans/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lexhuismans/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lexhuismans/subscriptions",
"organizations_url": "https://api.github.com/users/lexhuismans/orgs",
"repos_url": "https://api.github.com/users/lexhuismans/repos",
"events_url": "https://api.github.com/users/lexhuismans/events{/privacy}",
"received_events_url": "https://api.github.com/users/lexhuismans/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | # What does this PR do?
This PR completes an incomplete comment in the modeling_t5.py file.
`# ourselves in which case we just need to make it broadcastable to all heads.`
to
```
# We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
# ourselves in which case we just need to make it broadcastable to all heads.
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10886/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10886",
"html_url": "https://github.com/huggingface/transformers/pull/10886",
"diff_url": "https://github.com/huggingface/transformers/pull/10886.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10886.patch",
"merged_at": 1616696636000
} |
https://api.github.com/repos/huggingface/transformers/issues/10885 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10885/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10885/comments | https://api.github.com/repos/huggingface/transformers/issues/10885/events | https://github.com/huggingface/transformers/issues/10885 | 839,796,220 | MDU6SXNzdWU4Mzk3OTYyMjA= | 10,885 | Memory accumulates when training in a loop | {
"login": "joawar",
"id": 46854160,
"node_id": "MDQ6VXNlcjQ2ODU0MTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/46854160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joawar",
"html_url": "https://github.com/joawar",
"followers_url": "https://api.github.com/users/joawar/followers",
"following_url": "https://api.github.com/users/joawar/following{/other_user}",
"gists_url": "https://api.github.com/users/joawar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joawar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joawar/subscriptions",
"organizations_url": "https://api.github.com/users/joawar/orgs",
"repos_url": "https://api.github.com/users/joawar/repos",
"events_url": "https://api.github.com/users/joawar/events{/privacy}",
"received_events_url": "https://api.github.com/users/joawar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I tried adding\r\n```python\r\ndel training_args\r\ndel trainer\r\ndel model_config\r\ndel run\r\ngc.collect()\r\nth.cuda.empty_cache()\r\n```\r\nto the end of each loop, but it does not seem to change anything. ",
"I think the memory problem comes from the wandb integration. I do not see the problem without it: memory resets at 0 at each new step of the loop and goes back to the same max value.",
"Use torc with no grad inside for loop",
"Seems like the same problem occurs with wandb's sweeps, so it looks like a wandb problem more than a huggingface one. I can't use wandb then, sucks :/",
"cc @borisdayma so you are aware."
] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | The problem is that GPU memory allocated accumulates for each run. This eventually results in a `RuntimeError: CUDA out of memory` error. You can see the wandb GPU memory allocated, produced by the code below, here: [wandb](https://wandb.ai/jwa018/Bug/reports/Shared-panel-21-03-24-15-03-56--Vmlldzo1NTYxODI?accessToken=6euxv33b2zmga0uwegtws13724totvgs13hr6l1ni4bsek376cutfte3l3gtx5dz)
I had the same problem when using Trainer's built in hyperparameter_search, which also runs training in a loop I assume.
Similar issues from the past are:
https://github.com/huggingface/transformers/issues/1742
https://github.com/huggingface/transformers/issues/1134
https://gitmemory.com/issue/huggingface/transformers/9929/770965726
## Environment info
- `transformers` version: 4.4.2
- Platform: Linux-4.15.0-128-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: I don't explicitly use GPU but I assume the Trainer object does. See code below
- Using distributed or parallel set-up in script?: No
### Who can help
Library:
- trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...): `BertForSequenceClassification.from_pretrained('bert-base-cased')`
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I have my own dataset, but I've reproduced the issue with the Amazon Polarity dataset from Hugging Face's datasets
## To reproduce
Steps to reproduce the behavior:
1. Create Trainer object in a loop
2. Run training in the loop
This code reproduces the error.
```python
from transformers import (
BertForSequenceClassification,
BertTokenizer,
Trainer,
TrainingArguments,
BertConfig,
)
from datasets import load_dataset
from torch.utils.data import Dataset
import torch as th
import wandb
import os
class AmazonDataset(Dataset):
def __init__(self, data, tokenizer, max_len):
self.tokenizer = tokenizer
self.text = data['content']
self.labels = data['label']
self.max_len = max_len
self.n_datapoints = len(self.labels)
def __len__(self):
return self.n_datapoints
def __getitem__(self, idx):
text = self.text[idx]
assert type(text) is str
inputs = self.tokenizer(
text=text,
text_pair=None,
add_special_tokens=True,
padding='max_length',
truncation=True,
max_length=self.max_len,
return_tensors='pt'
)
return {
'input_ids': th.flatten(inputs['input_ids']).type(th.long),
'token_type_ids': th.flatten(
inputs['token_type_ids']).type(th.long),
'attention_mask': th.flatten(
inputs['attention_mask']).type(th.long),
'labels': th.tensor(self.labels[idx], dtype=th.long)
}
def model_init():
return BertForSequenceClassification.from_pretrained(
MODEL_NAME, return_dict=True
)
if __name__ == '__main__':
os.environ['WANDB_WATCH'] = 'all'
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
dataset = load_dataset('amazon_polarity')
train = AmazonDataset(
data=dataset['train'][:5000],
tokenizer=tokenizer,
max_len=300
)
test = AmazonDataset(
data=dataset['test'][:500],
tokenizer=tokenizer,
max_len=300
)
MODEL_NAME = 'bert-base-cased'
N_EPOCHS = 1
warmup_steps = int(len(train)*N_EPOCHS)
for i in range(10):
training_args = TrainingArguments(
output_dir='output',
do_train=True,
do_eval=True,
evaluation_strategy='steps',
learning_rate=2e-5,
weight_decay=0.1,
logging_steps=50,
per_device_eval_batch_size=30,
per_device_train_batch_size=15,
seed=1,
num_train_epochs=N_EPOCHS,
disable_tqdm=True,
report_to=['wandb'],
load_best_model_at_end=False,
lr_scheduler_type='linear',
warmup_steps=warmup_steps
)
model_config = BertConfig(
vocab_size=tokenizer.vocab_size,
pretrained_model_name_or_path=MODEL_NAME,
num_labels=2,
return_dict=True
)
trainer = Trainer(
args=training_args,
train_dataset=train,
eval_dataset=test,
tokenizer=tokenizer,
model_init=model_init
)
run = wandb.init(
project='Bug',
name=f'Bug{i}'
)
trainer.train()
run.finish()
```
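To isolate whether one of the reporting integrations is involved, the same loop can be run with reporting disabled and without the `wandb.init()` / `run.finish()` calls. A sketch showing only the changed `TrainingArguments` (everything else as above; this is an assumption-labeled diagnostic, not a confirmed fix):

```python
# Sketch: same loop as above, but with experiment reporting turned off, to check
# whether GPU memory still accumulates across iterations without the wandb integration.
training_args = TrainingArguments(
    output_dir='output',
    do_train=True,
    do_eval=True,
    evaluation_strategy='steps',
    per_device_train_batch_size=15,
    per_device_eval_batch_size=30,
    num_train_epochs=N_EPOCHS,
    disable_tqdm=True,
    report_to=[],  # no wandb/tensorboard reporting
)
```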
## Expected behavior
The loop runs without memory accumulating for each run.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10885/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10885/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10884 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10884/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10884/comments | https://api.github.com/repos/huggingface/transformers/issues/10884/events | https://github.com/huggingface/transformers/issues/10884 | 839,690,619 | MDU6SXNzdWU4Mzk2OTA2MTk= | 10,884 | Wav2vec2 Training Loss not decreasing | {
"login": "gauravgund",
"id": 46312442,
"node_id": "MDQ6VXNlcjQ2MzEyNDQy",
"avatar_url": "https://avatars.githubusercontent.com/u/46312442?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gauravgund",
"html_url": "https://github.com/gauravgund",
"followers_url": "https://api.github.com/users/gauravgund/followers",
"following_url": "https://api.github.com/users/gauravgund/following{/other_user}",
"gists_url": "https://api.github.com/users/gauravgund/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gauravgund/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gauravgund/subscriptions",
"organizations_url": "https://api.github.com/users/gauravgund/orgs",
"repos_url": "https://api.github.com/users/gauravgund/repos",
"events_url": "https://api.github.com/users/gauravgund/events{/privacy}",
"received_events_url": "https://api.github.com/users/gauravgund/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It seems that your training epochs is set to 1500. Set it up to 5 for a quick trial",
"It is giving training loss of more than a 100 in that case!",
"Is your training finished or is still running? If it is the second, just try again with less number of epochs for example",
"This is the output for 5 epochs:\r\n\r\n\r\nTrainOutput(global_step=10, training_loss=93.45169677734376, metrics={'train_runtime': 48.9011, 'train_samples_per_second': 0.204, 'total_flos': 2.6027104384512e+16, 'epoch': 5.0, 'init_mem_cpu_alloc_delta': 348007, 'init_mem_gpu_alloc_delta': 377847808, 'init_mem_cpu_peaked_delta': 18306, 'init_mem_gpu_peaked_delta': 0, 'train_mem_cpu_alloc_delta': 706705, 'train_mem_gpu_alloc_delta': 1120621568, 'train_mem_cpu_peaked_delta': 161498645, 'train_mem_gpu_peaked_delta': 7221921792})",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,621 | 1,621 | NONE | null | ## Environment info
- `transformers` version:4.4.0
- Platform: Google Colab
- Python version: python 3.6
@patrickvonplaten
Models:
wav2vec2
I am following the recent implementation of wav2vec2 for fine-tuning:
https://huggingface.co/blog/fine-tune-wav2vec2-english
Settings:
```
Pretrained model: "facebook/wav2vec2-base-960h",
gradient_checkpointing=True,
ctc_loss_reduction="mean",
pad_token_id=processor.tokenizer.pad_token_id
attention_dropout=0.1,
hidden_dropout=0.1,
feat_proj_dropout=0.0,
mask_time_prob=0.05,
layerdrop=0.1,
gradient_checkpointing=True,
ctc_loss_reduction="mean",
group_by_length=True,
per_device_train_batch_size=32,
evaluation_strategy="steps",
num_train_epochs=1500,
fp16=True,
save_steps=400,  # this would mean every 400 steps the model gets saved, which also means Google Drive gets full
eval_steps=400,
logging_steps=400,
learning_rate=0.0005,
warmup_steps=500,
save_total_limit=2,
```
Issue:
Step | Training Loss | Validation Loss | Wer | Runtime | Samples Per Second
-- | -- | -- | -- | -- | --
400 | 5.063200 | 4.566135 | 1.000000 | 0.715900 | 6.984000
800 | 5.115200 | 4.514411 | 1.000000 | 0.732400 | 6.827000
1200 | 5.119200 | 4.485986 | 1.000000 | 0.724300 | 6.903000
The training loss is only marginally decreasing and WER is still 1. What can be done to improve this and to get faster training with better accuracy?
I also tried a higher learning rate, but the training loss was still very poor; it seems the model is not converging. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10884/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10883 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10883/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10883/comments | https://api.github.com/repos/huggingface/transformers/issues/10883/events | https://github.com/huggingface/transformers/pull/10883 | 839,548,282 | MDExOlB1bGxSZXF1ZXN0NTk5NTQ1MTQ1 | 10,883 | [Community notebooks] Add notebook for fine-tuning Bart with Trainer in two langs | {
"login": "elsanns",
"id": 3648991,
"node_id": "MDQ6VXNlcjM2NDg5OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3648991?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elsanns",
"html_url": "https://github.com/elsanns",
"followers_url": "https://api.github.com/users/elsanns/followers",
"following_url": "https://api.github.com/users/elsanns/following{/other_user}",
"gists_url": "https://api.github.com/users/elsanns/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elsanns/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elsanns/subscriptions",
"organizations_url": "https://api.github.com/users/elsanns/orgs",
"repos_url": "https://api.github.com/users/elsanns/repos",
"events_url": "https://api.github.com/users/elsanns/events{/privacy}",
"received_events_url": "https://api.github.com/users/elsanns/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | Add a community notebook on fine-tuning Bart for summarization on wiki_lingua with Trainer.
Includes:
- a non-English example (English, French)
- DataCollatorForSeq2Seq
- label padding with -100 (ignore in loss)
- Wandb integration | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10883/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10883",
"html_url": "https://github.com/huggingface/transformers/pull/10883",
"diff_url": "https://github.com/huggingface/transformers/pull/10883.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10883.patch",
"merged_at": 1616598218000
} |
https://api.github.com/repos/huggingface/transformers/issues/10882 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10882/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10882/comments | https://api.github.com/repos/huggingface/transformers/issues/10882/events | https://github.com/huggingface/transformers/issues/10882 | 839,522,808 | MDU6SXNzdWU4Mzk1MjI4MDg= | 10,882 | AttributeError: 'RobertaConfig' object has no attribute 'attn_type' | {
"login": "sldsee18",
"id": 81295234,
"node_id": "MDQ6VXNlcjgxMjk1MjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/81295234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sldsee18",
"html_url": "https://github.com/sldsee18",
"followers_url": "https://api.github.com/users/sldsee18/followers",
"following_url": "https://api.github.com/users/sldsee18/following{/other_user}",
"gists_url": "https://api.github.com/users/sldsee18/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sldsee18/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sldsee18/subscriptions",
"organizations_url": "https://api.github.com/users/sldsee18/orgs",
"repos_url": "https://api.github.com/users/sldsee18/repos",
"events_url": "https://api.github.com/users/sldsee18/events{/privacy}",
"received_events_url": "https://api.github.com/users/sldsee18/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Found solution from [#10446](https://github.com/huggingface/transformers/issues/10446).\r\n\r\nShould follow this step instead:\r\n`git clone https://github.com/huggingface/transformers`\r\n\r\n`cd transformers`\r\n\r\n`pip install .`\r\n\r\n"
] | 1,616 | 1,616 | 1,616 | NONE | null | **Environment**
Google Colab. Installed the '4.5.0.dev0' version of transformers by `!pip install git+https://github.com/huggingface/transformers`
**Issues**
Hi guys, I tried to fine-tune RoBERTa on WikiText-2 by following the commands shared in the examples/language-modeling section of the [github page](https://github.com/huggingface/transformers/tree/master/examples/language-modeling#robertabertdistilbert-and-masked-language-modeling) as follows:
```
python run_mlm.py \
--model_name_or_path roberta-base \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train \
--do_eval \
--output_dir /tmp/test-mlm
```
but I ran into an error: `AttributeError: 'RobertaConfig' object has no attribute 'attn_type'`. It looks like it cannot find the config needed.
Please advise on what I did wrong. Thanks!
**To reproduce**
```
python run_mlm.py \
--model_name_or_path roberta-base \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train \
--do_eval \
--output_dir /tmp/test-mlm
```
**Error message I got:**
```
2021-03-24 08:51:51.464928: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
03/24/2021 08:51:52 - WARNING - __main__ - Process rank: -1, device: cpu, n_gpu: 0distributed training: False, 16-bits training: False
03/24/2021 08:51:53 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir=/tmp/test-mlm, overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=IntervalStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_ratio=0.0, warmup_steps=0, logging_dir=runs/Mar24_08-51-52_f7b8b5062dd4, logging_strategy=IntervalStrategy.STEPS, logging_first_step=False, logging_steps=500, save_strategy=IntervalStrategy.STEPS, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=/tmp/test-mlm, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, report_to=['tensorboard'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, _n_gpu=0)
03/24/2021 08:51:53 - WARNING - datasets.builder - Reusing dataset wikitext (/root/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91)
[INFO|configuration_utils.py:472] 2021-03-24 08:51:53,301 >> loading configuration file https://huggingface.co/roberta-base/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/733bade19e5f0ce98e6531021dd5180994bb2f7b8bd7e80c7968805834ba351e.35205c6cfc956461d8515139f0f8dd5d207a2f336c0c3a83b4bc8dca3518e37b
[INFO|configuration_utils.py:508] 2021-03-24 08:51:53,301 >> Model config RobertaConfig {
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.5.0.dev0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
[INFO|configuration_utils.py:472] 2021-03-24 08:51:53,358 >> loading configuration file https://huggingface.co/roberta-base/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/733bade19e5f0ce98e6531021dd5180994bb2f7b8bd7e80c7968805834ba351e.35205c6cfc956461d8515139f0f8dd5d207a2f336c0c3a83b4bc8dca3518e37b
[INFO|configuration_utils.py:508] 2021-03-24 08:51:53,359 >> Model config RobertaConfig {
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.5.0.dev0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
[INFO|tokenization_utils_base.py:1702] 2021-03-24 08:51:53,706 >> loading file https://huggingface.co/roberta-base/resolve/main/vocab.json from cache at /root/.cache/huggingface/transformers/d3ccdbfeb9aaa747ef20432d4976c32ee3fa69663b379deb253ccfce2bb1fdc5.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab
[INFO|tokenization_utils_base.py:1702] 2021-03-24 08:51:53,707 >> loading file https://huggingface.co/roberta-base/resolve/main/merges.txt from cache at /root/.cache/huggingface/transformers/cafdecc90fcab17011e12ac813dd574b4b3fea39da6dd817813efa010262ff3f.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
[INFO|tokenization_utils_base.py:1702] 2021-03-24 08:51:53,707 >> loading file https://huggingface.co/roberta-base/resolve/main/tokenizer.json from cache at /root/.cache/huggingface/transformers/d53fc0fa09b8342651efd4073d75e19617b3e51287c2a535becda5808a8db287.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730
[INFO|tokenization_utils_base.py:1702] 2021-03-24 08:51:53,707 >> loading file https://huggingface.co/roberta-base/resolve/main/added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:1702] 2021-03-24 08:51:53,707 >> loading file https://huggingface.co/roberta-base/resolve/main/special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:1702] 2021-03-24 08:51:53,707 >> loading file https://huggingface.co/roberta-base/resolve/main/tokenizer_config.json from cache at None
[INFO|modeling_utils.py:1051] 2021-03-24 08:51:53,860 >> loading weights file https://huggingface.co/roberta-base/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/51ba668f7ff34e7cdfa9561e8361747738113878850a7d717dbc69de8683aaad.c7efaa30a0d80b2958b876969faa180e485944a849deee4ad482332de65365a7
Traceback (most recent call last):
File "/content/drive/MyDrive/Colab Notebooks/run_mlm.py", line 461, in <module>
main()
File "/content/drive/MyDrive/Colab Notebooks/run_mlm.py", line 306, in main
use_auth_token=True if model_args.use_auth_token else None,
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py", line 1058, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/xlnet/modeling_xlnet.py", line 1309, in __init__
self.attn_type = config.attn_type
AttributeError: 'RobertaConfig' object has no attribute 'attn_type'
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10882/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10881 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10881/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10881/comments | https://api.github.com/repos/huggingface/transformers/issues/10881/events | https://github.com/huggingface/transformers/issues/10881 | 839,482,123 | MDU6SXNzdWU4Mzk0ODIxMjM= | 10,881 | MlFlow log artefacts | {
"login": "dmilcevski",
"id": 4984299,
"node_id": "MDQ6VXNlcjQ5ODQyOTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4984299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dmilcevski",
"html_url": "https://github.com/dmilcevski",
"followers_url": "https://api.github.com/users/dmilcevski/followers",
"following_url": "https://api.github.com/users/dmilcevski/following{/other_user}",
"gists_url": "https://api.github.com/users/dmilcevski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dmilcevski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dmilcevski/subscriptions",
"organizations_url": "https://api.github.com/users/dmilcevski/orgs",
"repos_url": "https://api.github.com/users/dmilcevski/repos",
"events_url": "https://api.github.com/users/dmilcevski/events{/privacy}",
"received_events_url": "https://api.github.com/users/dmilcevski/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have no written the `MLFlowCallback` (external integrations are maintained by contributors or the authors of the external libraries themselves) but I can confirm the command will indeed not log the model weights. The callback does get the model in the kwargs, so it's completely possible to get it from there and upload it, like it's done in the `WandbCallback`.",
"Thanks @sgugger for your fast reply! \r\n\r\nI tested your suggestion, but it doesn't quite work. I don't know about the `WandbCallback`, but in `MLFlowCallback` in the `on_train_end`, when the model is saved and logged to `mlflow`, the `mlflow` run is ended, and the later loggin of metrics, like evaluation and testing are logged in separate run, which is not what we want.\r\nI don't know if this happens when you create `fake_trainer` with `fake_trainer = Trainer(args=args, model=model, tokenizer=tokenizer)` or when the artifacts are logged with `self._ml_flow.log_artifacts(temp_dir)`.\r\n\r\nAlso the current `MLFlowCallback` doesn't log testing metrics. I know this is not a fault of the callback, it is in the `trainer.predict()` method that doesn't have call to `log()` internally. The workaround is to call `trainer.log(metrics)` after \r\n```\r\ntrainer.log_metrics(\"test\", metrics)\r\ntrainer.save_metrics(\"test\", metrics)\r\n```\r\nin the example. ",
"As I said, I haven't written that callback: integrations with reporting platforms are entirely maintained by the developers of those integrations or the community. You can open a PR with your fixes!",
"I understand. However, not having a callback hook on the `save_model` would be difficult. \r\n\r\nIf somebody is interested, a dirty workaround I did is,\r\n1. Register own `MLflowCallback`\r\n\r\n```\r\ntrainer = Trainer(\r\n ...\r\n callbacks=[MLflowCallback]\r\n)\r\ntrainer.remove_callback(transformers.integrations.MLflowCallback)\r\n```\r\n2. Add method in the class:\r\n```\r\n def log_artifact(self, output_dir):\r\n if self._initialized:\r\n logger.info(\"Logging artifacts. This may take time.\")\r\n self._ml_flow.log_artifacts(output_dir)\r\n```\r\n\r\n3. In the `run_ner.py` file, at the very end (or after ` trainer.save_model()`) added\r\n```\r\nml_flow_callback = trainer.pop_callback(MLflowCallback)\r\nml_flow_callback.log_artifact(training_args.output_dir)\r\n```\r\nWhich removes the `MLflowCallback` and tells to log the model.\r\n\r\nI know it is dirty, but if I come up with better solution I will open PR.\r\n\r\nThanks!",
"> However, not having a callback hook on the save_model would be difficult.\r\n\r\nNot that this hook would be called when each checkpoint is saved, not just at the end of training. So you would not only save the last model.",
"You are right, even with my hack of logging the saved model from the `output_dir`, transfers the checkpoint models as well, which is not what we need.\r\n\r\nI think modifying `MLflowCallback.on_train_end` with the code from `Trainer._save` should save only the model in temp directory and log it to mlflow. This way, we don't lose the current mlflow run and we dont save everything from the `output_dir`.\r\n\r\n```\r\n def on_train_end(self, args, state, control, model=None, tokenizer=None, **kwargs):\r\n if self._initialized and state.is_world_process_zero and self._log_artifacts:\r\n logger.info(\"Logging artifacts. This may take time.\")\r\n with tempfile.TemporaryDirectory() as temp_dir:\r\n\r\n if not isinstance(model, PreTrainedModel):\r\n if isinstance(unwrap_model(model), PreTrainedModel):\r\n state_dict = model.state_dict()\r\n unwrap_model(model).save_pretrained(temp_dir, state_dict=state_dict)\r\n else:\r\n logger.info(\"Trainer.model is not a `PreTrainedModel`, only saving its state dict.\")\r\n state_dict = model.state_dict()\r\n torch.save(state_dict, os.path.join(temp_dir, WEIGHTS_NAME))\r\n else:\r\n state_dict = model.state_dict()\r\n model.save_pretrained(temp_dir, state_dict=state_dict)\r\n if tokenizer is not None:\r\n tokenizer.save_pretrained(temp_dir)\r\n\r\n # Good practice: save your training arguments together with the trained model\r\n torch.save(args, os.path.join(temp_dir, \"training_args.bin\"))\r\n self._ml_flow.log_artifacts(temp_dir)\r\n```\r\n\r\nIf you think this is a good idea, maybe it can be added in the `MLflowCallback` integration.\r\nThanks!",
"That sounds like the best compromise yes.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hello, any update on this problem? I am trying to log a model with mlflow but the artifacts aren't registered.\r\n\r\nCould you please help me with this ?\r\n\r\nBest Regards,",
"> Hello, any update on this problem? I am trying to log a model with mlflow but the artifacts aren't registered.\r\n> \r\n> Could you please help me with this ?\r\n> \r\n> Best Regards,\r\n\r\nDid you export `HF_MLFLOW_LOG_ARTIFACTS` environment variable and set it to `True`?",
"I was just trying with `HF_MLFLOW_LOG_ARTIFACTS` set and nothing was appearing in the mlflow artifacts"
] | 1,616 | 1,680 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.4.2
- Platform: Darwin-20.3.0-x86_64-i386-64bit
- Python version: 3.7.4
- PyTorch version (GPU?): 1.3.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: NER
* [ ] my own task or dataset: (give details below)
## To reproduce
The bug is for the PR #8016.
Steps to reproduce the behavior:
1. MlFlow installed and the following env variables exported
```
export HF_MLFLOW_LOG_ARTIFACTS=TRUE
export MLFLOW_S3_ENDPOINT_URL=<custom endpont>
export MLFLOW_TRACKING_URI=<custom uri>
export MLFLOW_TRACKING_TOKEN=<custom token>
```
2. Run the token classification example with the following command
```
python run_ner.py \
--model_name_or_path bert-base-uncased \
--dataset_name conll2003 \
--output_dir /tmp/test-ner \
--do_train \
--do_eval
```
## Expected behavior
When the training finishes, before the evaluation is performed, the `integrations.MLflowCallback` executes the method `on_train_end`, where if the env variable `HF_MLFLOW_LOG_ARTIFACTS` is set to `TRUE`, it logs the model artifacts to mlflow.
The problem, however, is that when the method `on_train_end` is called and the line `self._ml_flow.log_artifacts(args.output_dir)` is executed, the model has not yet been stored in `args.output_dir`. The model artifacts are only stored once `trainer.save_model()` is called, which happens after training ends. There is no hook in `trainer.save_model()` that a `TrainerCallback` could use to log the model. There is a `TrainerCallback.on_save()` method, called from `trainer._maybe_log_save_evaluate()`, but even then the model is not available in `output_dir`.
A possible solution would be to extend `TrainerCallback` with an `on_model_save()` callback method and call it from `trainer.save_model()`.
Alternatively, a workaround I have now is to replace `on_train_end` with `on_evaluate` in `integrations.MLflowCallback`, which is called after the model is saved in the example script. However, this is not the right solution since it depends on the `do_eval` parameter being set, and it is not semantically correct.
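For illustration, a rough sketch of what such a hook could look like on the MLflow side (hypothetical API: `on_model_save` does not exist today and would only fire if `Trainer.save_model()` were extended to call it):

```python
from transformers.integrations import MLflowCallback

class MLflowCallbackWithModelSave(MLflowCallback):
    # Hypothetical hook: only ever invoked if Trainer.save_model() were extended
    # to fire an `on_model_save` event after writing the final model to output_dir.
    def on_model_save(self, args, state, control, **kwargs):
        if self._initialized and state.is_world_process_zero and self._log_artifacts:
            self._ml_flow.log_artifacts(args.output_dir)
```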
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10881/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10880 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10880/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10880/comments | https://api.github.com/repos/huggingface/transformers/issues/10880/events | https://github.com/huggingface/transformers/issues/10880 | 839,425,154 | MDU6SXNzdWU4Mzk0MjUxNTQ= | 10,880 | Scheduler Not Pickleable | {
"login": "iamNCJ",
"id": 28685287,
"node_id": "MDQ6VXNlcjI4Njg1Mjg3",
"avatar_url": "https://avatars.githubusercontent.com/u/28685287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iamNCJ",
"html_url": "https://github.com/iamNCJ",
"followers_url": "https://api.github.com/users/iamNCJ/followers",
"following_url": "https://api.github.com/users/iamNCJ/following{/other_user}",
"gists_url": "https://api.github.com/users/iamNCJ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iamNCJ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iamNCJ/subscriptions",
"organizations_url": "https://api.github.com/users/iamNCJ/orgs",
"repos_url": "https://api.github.com/users/iamNCJ/repos",
"events_url": "https://api.github.com/users/iamNCJ/events{/privacy}",
"received_events_url": "https://api.github.com/users/iamNCJ/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Succeed to reproduce with another model and dataset.\r\n\r\nhttps://gist.github.com/iamNCJ/a30afcbac392f6036bed65198ce5295e\r\n\r\n[gist](https://gist.github.com/iamNCJ/a30afcbac392f6036bed65198ce5295e)\r\n\r\nThis gist is derived from [an example provided by the pytorch lightening team](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb), but it also causes this problem with multiple gpus.\r\n\r\nOutput:\r\n```text\r\nTraceback (most recent call last): File \"glue.py\", line 272, in <module>\r\n trainer.fit(model, dm)\r\n File \"/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py\", line 498, in fit\r\n self.dispatch()\r\n File \"/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py\", line 545, in dispatch\r\n self.accelerator.start_training(self)\r\n File \"/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py\", line 73, in start_training\r\n self.training_type_plugin.start_training(trainer)\r\n File \"/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py\", line 106, in start_training\r\n mp.spawn(self.new_process, **self.mp_spawn_kwargs)\r\n File \"/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/torch/multiprocessing/spawn.py\", line 230, in spawn\r\n return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')\r\n File \"/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/torch/multiprocessing/spawn.py\", line 179, in start_processes\r\n process.start()\r\n File \"/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/process.py\", line 121, in start\r\n self._popen = self._Popen(self)\r\n File \"/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/context.py\", line 284, in _Popen\r\n return Popen(process_obj)\r\n File \"/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/popen_spawn_posix.py\", line 32, in __init__\r\n super().__init__(process_obj)\r\n File \"/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/popen_fork.py\", line 19, in __init__\r\n self._launch(process_obj)\r\n File \"/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/popen_spawn_posix.py\", line 47, in _launch\r\n reduction.dump(process_obj, fp)\r\n File \"/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/reduction.py\", line 60, in dump\r\n ForkingPickler(file, protocol).dump(obj)\r\nAttributeError: Can't pickle local object 'get_linear_schedule_with_warmup.<locals>.lr_lambda'\r\n```",
"I succeed to start training after using DDP instead of DDP Spawn, since DDP Spawn forces the model to be pickleable but DDP doesn't, but I still wonder if it's possible to make the scheduler pickleable.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.4.2
- Platform: Linux-3.10.0-1127.13.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.8 (Anaconda)
- PyTorch version (GPU): 1.8.0+cu111 (True)
- Tensorflow version (GPU): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
This seems to be a bug concerning the `Optimization` class.
## Information
Model I am using (Bert, XLNet ...): BertMultipleChoice
The problem arises when using:
* [ ] my own modified scripts: I'm using transformers with PyTorch Lightning, and the distributed training functionality is provided by PyTorch Lightning.
The tasks I am working on is:
* Reading Comprehension on the RACE dataset
## To reproduce
Steps to reproduce the behavior:
1. load RACE into a datamodule
2. finetune BertMultipleChoice on this datamodule
3. start training with `gpus=-1`
Output:
```text
Traceback (most recent call last):
File "train.local.py", line 35, in <module>
trainer.fit(model, dm)
File "/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 498, in fit
self.dispatch()
File "/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 545, in dispatch
self.accelerator.start_training(self)
File "/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 73, in start_training
self.training_type_plugin.start_training(trainer)
File "/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py", line 106, in start_training
mp.spawn(self.new_process, **self.mp_spawn_kwargs)
File "/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 179, in start_processes
process.start()
File "/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'get_linear_schedule_with_warmup.<locals>.lr_lambda'
```
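The traceback points at the `lr_lambda` closure created inside `get_linear_schedule_with_warmup`; locally defined functions cannot be pickled, which the spawn-based DDP start method requires. As a workaround sketch (not part of transformers), the same schedule can be expressed as a module-level callable class, which is picklable:

```python
from torch.optim.lr_scheduler import LambdaLR

class LinearWarmupDecay:
    """Picklable equivalent of the nested lr_lambda used by get_linear_schedule_with_warmup."""

    def __init__(self, num_warmup_steps, num_training_steps):
        self.num_warmup_steps = num_warmup_steps
        self.num_training_steps = num_training_steps

    def __call__(self, current_step):
        # Linear warmup followed by linear decay to zero.
        if current_step < self.num_warmup_steps:
            return float(current_step) / float(max(1, self.num_warmup_steps))
        return max(
            0.0,
            float(self.num_training_steps - current_step)
            / float(max(1, self.num_training_steps - self.num_warmup_steps)),
        )

# usage: scheduler = LambdaLR(optimizer, LinearWarmupDecay(num_warmup_steps, num_training_steps))
```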
## Expected behavior
It should start training on all GPUs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10880/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10880/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10879 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10879/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10879/comments | https://api.github.com/repos/huggingface/transformers/issues/10879/events | https://github.com/huggingface/transformers/pull/10879 | 839,393,774 | MDExOlB1bGxSZXF1ZXN0NTk5NDE2Nzky | 10,879 | error type of tokenizer in __init__ definition | {
"login": "ZhengZixiang",
"id": 19514611,
"node_id": "MDQ6VXNlcjE5NTE0NjEx",
"avatar_url": "https://avatars.githubusercontent.com/u/19514611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhengZixiang",
"html_url": "https://github.com/ZhengZixiang",
"followers_url": "https://api.github.com/users/ZhengZixiang/followers",
"following_url": "https://api.github.com/users/ZhengZixiang/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhengZixiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhengZixiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhengZixiang/subscriptions",
"organizations_url": "https://api.github.com/users/ZhengZixiang/orgs",
"repos_url": "https://api.github.com/users/ZhengZixiang/repos",
"events_url": "https://api.github.com/users/ZhengZixiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhengZixiang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | the orignal code in line 246 is
```
tokenizer: Optional["PreTrainedTokenizerBase"] = None,
```
it should be
```
tokenizer: Optional[PreTrainedTokenizerBase] = None,
```
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10879/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10879",
"html_url": "https://github.com/huggingface/transformers/pull/10879",
"diff_url": "https://github.com/huggingface/transformers/pull/10879.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10879.patch",
"merged_at": 1616598014000
} |
https://api.github.com/repos/huggingface/transformers/issues/10878 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10878/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10878/comments | https://api.github.com/repos/huggingface/transformers/issues/10878/events | https://github.com/huggingface/transformers/issues/10878 | 839,325,292 | MDU6SXNzdWU4MzkzMjUyOTI= | 10,878 | RuntimeError: while running run_common_voice.py (XLSR wav2vec finetuning week) | {
"login": "raja1196",
"id": 23166164,
"node_id": "MDQ6VXNlcjIzMTY2MTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/23166164?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raja1196",
"html_url": "https://github.com/raja1196",
"followers_url": "https://api.github.com/users/raja1196/followers",
"following_url": "https://api.github.com/users/raja1196/following{/other_user}",
"gists_url": "https://api.github.com/users/raja1196/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raja1196/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raja1196/subscriptions",
"organizations_url": "https://api.github.com/users/raja1196/orgs",
"repos_url": "https://api.github.com/users/raja1196/repos",
"events_url": "https://api.github.com/users/raja1196/events{/privacy}",
"received_events_url": "https://api.github.com/users/raja1196/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I am experiencing the same error. I have been all the day working around without solving it. I have tried trough Docker Cuda versions 9.2, 10.0 and 10.1 and different versions of pytorch including 1.3, 1.5 and 1.6. I have tried with different combinations of GTX1080 and RTX2080\r\n\r\nAdding \"--ddp_find_unused_parameters=true\" to the python command does not fix the error.\r\n\r\nAny help is really appreciated as I am working on the fine-tuning week @patrickvonplaten ",
"I am experience this error too.\r\nCUDA 11.2\r\n4xT4 - 16Gb\r\n`--dataset_config_name=\"ru\"`",
"@raja1196 I think I have found the bug. Could you try modifying in run_common_voice.py the gradient_checkpointing to False, as it is written below:\r\n\r\n```\r\ngradient_checkpointing: Optional[bool] = field(\r\n default=False,\r\n metadata={\r\n \"help\": \"If True, use gradient checkpointing to save memory at the expense of slower backward pass.\"\r\n },\r\n )\r\n```\r\n\r\nAnd then running the script without gradient_checkpointing as follows:\r\n\r\n`python -m torch.distributed.launch \\ --nproc_per_node 4 run_common_voice.py \\ --model_name_or_path=\"facebook/wav2vec2-large-xlsr-53\" \\ --dataset_config_name=\"tr\" \\ # use this argument to specify the language code --output_dir=./wav2vec2-large-xlsr-turkish-demo \\ --overwrite_output_dir \\ --num_train_epochs=\"5\" \\ --per_device_train_batch_size=\"16\" \\ --learning_rate=\"3e-4\" \\ --warmup_steps=\"500\" \\ --evaluation_strategy=\"steps\" \\ --save_steps=\"400\" \\ --eval_steps=\"400\" \\ --logging_steps=\"400\" \\ --save_total_limit=\"3\" \\ --freeze_feature_extractor \\ --feat_proj_dropout=\"0.0\" \\ --layerdrop=\"0.1\" \\ --fp16 \\ --group_by_length \\ --do_train --do_eval`\r\n\r\nThis solves the problem in my case and now I am able to run it with two GPUs. If it works to you, I will do PR",
"@ivangtorre s solution works. unfortunately, I have to reduce the batchsize quite a lot.\r\n\r\nUpdate: I stopped using distributed training for now, as I did not get any performance gains somehow. Does anyone know whether the CTC loss of this model is computed in a distributed way, or are the outputs gathered on a single gpu before computing loss?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.0.dev0 (I tried running it on 4.4.0 as well, gave the same error)
- Platform: Ubuntu (running on a virtual machine)
- Python version: 3.8
- PyTorch version (GPU?): 1.6.0
- Using GPU in script?: yes, running [this script](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py)
- Using distributed or parallel set-up in script?: Distributed
### Who can help
@patrickvonplaten (as per the message on slack group)
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
- [ ] the official example scripts: (give details below)
- [ ] my own modified scripts: (give details below)
I tried running both the official command and my own modified script (the command was changed based on the language).
The tasks I am working on is
- [ ] common voice dataset (ta)
## To reproduce
Steps to reproduce the behavior:
1. run common voice script [from here](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py)
2. For multi-gpu setup I used this command `python -m torch.distributed.launch \
--nproc_per_node 4 run_common_voice.py \
--model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
--dataset_config_name="tr" \ # use this argument to specify the language code
--output_dir=./wav2vec2-large-xlsr-turkish-demo \
--overwrite_output_dir \
--num_train_epochs="5" \
--per_device_train_batch_size="16" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--save_steps="400" \
--eval_steps="400" \
--logging_steps="400" \
--save_total_limit="3" \
--freeze_feature_extractor \
--feat_proj_dropout="0.0" \
--layerdrop="0.1" \
--gradient_checkpointing \
--fp16 \
--group_by_length \
--do_train --do_eval `
## Error:
`RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument 'find_unused_parameters=True' to 'torch.nn.parallel.DistributedDataParallel'; (2) making sure all 'forward' function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's 'forward' function. Please include the loss function and the structure of the return value of 'forward' of your module when reporting this issue (e.g. list, dict, iterable).`
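As a side note (my own debugging suggestion, not part of the original report), one way to see which parameters DDP would flag is to run a single forward/backward pass without distributed training and list the parameters that never received a gradient; `model` and `batch` (including labels) are assumed from the training setup above.
```python
# Rough debugging sketch: after one backward pass, any trainable parameter whose
# .grad is still None is what DDP reports as "unused".
outputs = model(**batch)
outputs.loss.backward()
unused = [name for name, p in model.named_parameters() if p.requires_grad and p.grad is None]
print(unused)
```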
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Model would train without any error
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10878/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10878/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10877 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10877/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10877/comments | https://api.github.com/repos/huggingface/transformers/issues/10877/events | https://github.com/huggingface/transformers/issues/10877 | 839,293,419 | MDU6SXNzdWU4MzkyOTM0MTk= | 10,877 | `XLMRobertaTokenizer` `encode_plus` api producing `<unk>` for a valid token | {
"login": "abgoswam",
"id": 8822956,
"node_id": "MDQ6VXNlcjg4MjI5NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8822956?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abgoswam",
"html_url": "https://github.com/abgoswam",
"followers_url": "https://api.github.com/users/abgoswam/followers",
"following_url": "https://api.github.com/users/abgoswam/following{/other_user}",
"gists_url": "https://api.github.com/users/abgoswam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abgoswam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abgoswam/subscriptions",
"organizations_url": "https://api.github.com/users/abgoswam/orgs",
"repos_url": "https://api.github.com/users/abgoswam/repos",
"events_url": "https://api.github.com/users/abgoswam/events{/privacy}",
"received_events_url": "https://api.github.com/users/abgoswam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, thanks for opening an issue! Seen with @n1to, and this comes from the Unigram-based tokenizers: Unigram's tokenize cuts the string into tokens, and then converts them to IDs. Unknown tokens are detected during the token to IDs conversion, rather than when the string is cut into tokens.\r\n\r\nThis is different to BPE, where the string is cut in a lot of independant characters, converted to IDs, then merged to gether.\r\n\r\nThis is also different to WordPiece, where we start from the word and cut it until we find a token representation for each word piece; if we don't, then that's unknown.",
"hi @LysandreJik . Thanks for looking into this, and sharing the info\r\n\r\nbased on your response it seems that for the `XLMRobertaTokenizer` tokenizer, we **cannot** guarantee that the following holds:\r\n\r\n```Python\r\nassert tokenizer.decode(tokenizer.convert_tokens_to_ids(tokenizer.tokenize(text))) == text\r\n```\r\n\r\nam i right ?",
"I believe that's true for any tokenizer. If the tokenizer cannot tokenize one part of your text as it is not part of your vocabulary, then some information is lost.",
"Hey guys,\r\n\r\nfor the sake of completeness, here's the double check with the reference implementation/tokenizer:\r\n\r\n```python\r\nimport torch\r\nxlmr = torch.hub.load('pytorch/fairseq', 'xlmr.base')\r\nxlmr.eval()\r\n\r\ntokens = xlmr.encode('请在黄鹂餐厅预订今晚7点半的位置。')\r\n```\r\n\r\nIt outputs:\r\n\r\n```bash\r\ntensor([ 0, 6, 9736, 213, 19390, 3, 113638, 209093, 155755,\r\n 966, 2391, 6193, 57486, 30, 2])\r\n```\r\n\r\n3 is the id for the unknown token, but you \"reverse\" tokenization with:\r\n\r\n```python\r\nxlmr.decode(tokens)\r\n```\r\n\r\nThis outputs:\r\n\r\n```bash\r\n'请在黄<unk>餐厅预订今晚7点半的位置。'\r\n```\r\n\r\nSo the `<unk>` token also appears :)",
"@LysandreJik . agree that for any tokenizer, some information loss might happen, if the token is not part of the vocab.\r\n\r\nI guess, `SentencePiece` tokenizer is unique in a way : in the sense that \r\n\r\n- `SentencePieceProcessor provides a lossless data conversion that allows the original raw sentence to be perfectly reconstructed from the encoded data, i.e., Decode(Encode(input)) == input.`\r\n- where, Encode and Decode correspond to tokenization and de-tokenization respectively.\r\n- https://github.com/google/sentencepiece/blob/bc53923a9147dc8ffa54034c8ed774de78cc4d39/src/sentencepiece_processor.h#L118\r\n\r\nBecause of this, in the `tokenize` api for `XLMRobertaTokenizer`, there is no `<unk>` when the string is being cut into tokens \r\n\r\nBut, in the `encode` api when the tokens are converted to ids, `<unk>` are permitted as @stefan-it confirmed.\r\n\r\nhttps://github.com/google/sentencepiece/blob/9cf136582d9cce492ba5a0cfb775f9e777fe07ea/src/unigram_model.cc#L433 \r\n",
"Thanks folks for the discussion and insight into the behaviour of tokenizers in HF. \r\n\r\nClosing this issue, since its not a bug per se.",
"\r\nhi guys.\r\n\r\nI try to reproduce the code that is at the beginning of the topic and I get the following:\r\n\r\n"
] | 1,616 | 1,619 | 1,617 | NONE | null | ## Environment info
- `transformers` version: 4.5.0.dev0 (latest master)
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.10
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
## Information
`XLMRobertaTokenizer` `encode_plus` api producing `<unk>` for a valid token
## To reproduce
```Python
from transformers import XLMRobertaTokenizer
tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
text = "请在黄鹂餐厅预订今晚7点半的位置。"
toks = tokenizer.tokenize(text)
assert toks == ['▁', '请', '在', '黄', '鹂', '餐厅', '预订', '今晚', '7', '点', '半', '的位置', '。']
output = tokenizer.encode_plus(text, add_special_tokens=False)
toks_converted = tokenizer.convert_ids_to_tokens(output['input_ids'])
assert toks_converted == ['▁', '请', '在', '黄', '<unk>', '餐厅', '预订', '今晚', '7', '点', '半', '的位置', '。']
```
## Expected behavior
```Python
assert toks_converted[4] == '鹂' # not <unk>
```
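For what it's worth, a quick check (my own addition, not part of the report) to see which tokens fall outside the vocabulary, reusing `tokenizer` and `toks` from the snippet above:
```python
# Map each token to its id and flag the ones that become the unknown token.
ids = tokenizer.convert_tokens_to_ids(toks)
oov = [tok for tok, i in zip(toks, ids) if i == tokenizer.unk_token_id]
print(oov)  # expected to show ['鹂'] given the discussion above
```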
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10877/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10876 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10876/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10876/comments | https://api.github.com/repos/huggingface/transformers/issues/10876/events | https://github.com/huggingface/transformers/pull/10876 | 839,211,286 | MDExOlB1bGxSZXF1ZXN0NTk5MjYyMzc0 | 10,876 | Add new notebook links in the docs | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oh actually, an additional comment: the title of the summarization notebook is currently \"Text classification on GLUE\" (and same for the translation)",
"Fixed the titles and the sentence, so merging. Thanks for the review!"
] | 1,616 | 1,616 | 1,616 | COLLABORATOR | null | # What does this PR do?
This PR adds links to the three missing tasks in the notebooks page: multiple choice, translation and summarization. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10876/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10876",
"html_url": "https://github.com/huggingface/transformers/pull/10876",
"diff_url": "https://github.com/huggingface/transformers/pull/10876.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10876.patch",
"merged_at": 1616593508000
} |
https://api.github.com/repos/huggingface/transformers/issues/10875 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10875/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10875/comments | https://api.github.com/repos/huggingface/transformers/issues/10875/events | https://github.com/huggingface/transformers/pull/10875 | 839,187,923 | MDExOlB1bGxSZXF1ZXN0NTk5MjQyNTk1 | 10,875 | Fix test_trainer_distributed | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | COLLABORATOR | null | # What does this PR do?
#10861 introduced a change in the way metrics are prefixed by default in `Trainer.predict`, which in turn made `tests/test_trainer_distributed.py` fail. This PR fixes that.
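For illustration only (assuming the `metric_key_prefix` argument of `Trainer.predict`; treat the exact argument name as an assumption on my part), the prefix determines how the returned metric keys are named:
```python
# Hypothetical usage sketch: with this prefix the metrics come back as
# {"test_loss": ...} rather than {"eval_loss": ...}.
output = trainer.predict(test_dataset, metric_key_prefix="test")
print(output.metrics)
```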
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10875/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10875",
"html_url": "https://github.com/huggingface/transformers/pull/10875",
"diff_url": "https://github.com/huggingface/transformers/pull/10875.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10875.patch",
"merged_at": 1616540586000
} |
https://api.github.com/repos/huggingface/transformers/issues/10874 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10874/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10874/comments | https://api.github.com/repos/huggingface/transformers/issues/10874/events | https://github.com/huggingface/transformers/issues/10874 | 839,082,427 | MDU6SXNzdWU4MzkwODI0Mjc= | 10,874 | transformers.models.auto.tokenization_auto | {
"login": "Sankalp1233",
"id": 38120178,
"node_id": "MDQ6VXNlcjM4MTIwMTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/38120178?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sankalp1233",
"html_url": "https://github.com/Sankalp1233",
"followers_url": "https://api.github.com/users/Sankalp1233/followers",
"following_url": "https://api.github.com/users/Sankalp1233/following{/other_user}",
"gists_url": "https://api.github.com/users/Sankalp1233/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sankalp1233/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sankalp1233/subscriptions",
"organizations_url": "https://api.github.com/users/Sankalp1233/orgs",
"repos_url": "https://api.github.com/users/Sankalp1233/repos",
"events_url": "https://api.github.com/users/Sankalp1233/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sankalp1233/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you reproduce this in a colab so that we can take a look? Thanks!",
"I reworked my code so I didn't get the error anymore I'm honestly not sure how it got fixed",
"I think it maybe because I didn't restart the runtime\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | NONE | null | I installed transformers 3.5.1 (to match the version on GitHub) using `!pip3 install transformers==3.5.1` and also ran `!pip3 install transformers`, but when I try to import SentenceTransformer using `from sentence_transformers import SentenceTransformer` I get `ModuleNotFoundError: No module named 'transformers.models.auto.tokenization_auto'`. I am not sure how to resolve this issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10874/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10873 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10873/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10873/comments | https://api.github.com/repos/huggingface/transformers/issues/10873/events | https://github.com/huggingface/transformers/issues/10873 | 839,051,105 | MDU6SXNzdWU4MzkwNTExMDU= | 10,873 | Wav2Vec2/XLRS-Wav2Vec2 Pre-Training | {
"login": "xaiguy",
"id": 48219849,
"node_id": "MDQ6VXNlcjQ4MjE5ODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/48219849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xaiguy",
"html_url": "https://github.com/xaiguy",
"followers_url": "https://api.github.com/users/xaiguy/followers",
"following_url": "https://api.github.com/users/xaiguy/following{/other_user}",
"gists_url": "https://api.github.com/users/xaiguy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xaiguy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xaiguy/subscriptions",
"organizations_url": "https://api.github.com/users/xaiguy/orgs",
"repos_url": "https://api.github.com/users/xaiguy/repos",
"events_url": "https://api.github.com/users/xaiguy/events{/privacy}",
"received_events_url": "https://api.github.com/users/xaiguy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"Wav2Vec2 Pre-Training is more important.",
"I would also like to be able to pre-train a Wav2Vec2 model using my own raw audio files in a self-supervised way. It would be even better if I could use a pre-trained model as a starting point. Is there any way to do this currently?",
"@czonios Yes, you can fine-tune a Wav2Vec2 model! Please check [this blogpost](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) by @patrickvonplaten. \r\n\r\nPre-training is not available as of now.",
"Hey, \r\n\r\nWe should have Wav2Vec2 Pretraining added in ~2 weeks.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Unstale",
"#11306 is under way I think",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,623 | 1,623 | NONE | null | Dear 🤗-team,
I'd like to do pre-training with your implementation of Wav2Vec2 and/or XLSR-Wav2Vec2. I was wondering if there are any plans to add such scripts (or even a demo) to the repository?
PS: I already did pre-training in NVIDIA NeMo, but I'm having problems with porting my checkpoints. Being able to do everything within the Huggingface framework would be great.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10873/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10873/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10872 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10872/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10872/comments | https://api.github.com/repos/huggingface/transformers/issues/10872/events | https://github.com/huggingface/transformers/issues/10872 | 838,971,320 | MDU6SXNzdWU4Mzg5NzEzMjA= | 10,872 | Training GPT2 does not use GPU | {
"login": "jnehring",
"id": 10537540,
"node_id": "MDQ6VXNlcjEwNTM3NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/10537540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jnehring",
"html_url": "https://github.com/jnehring",
"followers_url": "https://api.github.com/users/jnehring/followers",
"following_url": "https://api.github.com/users/jnehring/following{/other_user}",
"gists_url": "https://api.github.com/users/jnehring/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jnehring/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jnehring/subscriptions",
"organizations_url": "https://api.github.com/users/jnehring/orgs",
"repos_url": "https://api.github.com/users/jnehring/repos",
"events_url": "https://api.github.com/users/jnehring/events{/privacy}",
"received_events_url": "https://api.github.com/users/jnehring/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | NONE | null | I'm using [run_clm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py) to train GPT2. When using the [example from the documentation](https://github.com/huggingface/transformers/tree/master/examples/language-modeling) it works fine and uses the GPU. But when I start it on my custom dataset it does not use any GPUs. Can you give me a tip on how to get it to use the GPUs, or what might be wrong?
this works and uses GPUs:
```
python run_clm.py \
--model_name_or_path gpt2 \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train \
--do_eval \
--output_dir /netscratch/nehring/projects/opensubtitles/datadir/tmp \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2
```
this starts training but it does not use GPUs:
```
python run_clm.py \
--model_type gpt2 \
--tokenizer_name gpt2 \
--train_file $DATA_PATH/train.txt \
--validation_file $DATA_PATH/valid.txt \
--do_train \
--do_eval \
--output_dir /netscratch/nehring/projects/opensubtitles/datadir/models/gpt2-small \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 32 \
--num_train_epochs 10
```
This is my environment as created by `transformers-cli env`. It says that I did not install TensorFlow, but when I run `python -c 'import tensorflow as tf; print(tf.__version__)'` the command line prints "1.15.0".
```
- `transformers` version: 4.5.0.dev0
- Platform: Linux-5.4.0-65-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.8.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: this is the problem
- Using distributed or parallel set-up in script?: no
```
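As a first sanity check (my suggestion, not part of the original report), it can help to confirm what PyTorch actually sees inside the exact environment that launches the script:
```python
# Run this in the same shell/job that starts run_clm.py.
import os
import torch

print(torch.cuda.is_available(), torch.cuda.device_count())
print(os.environ.get("CUDA_VISIBLE_DEVICES"))
```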
Here is part of the output of run_clm.py. It says `_n_gpu=6`, so the GPUs are detected but for some reason they are not used.
```
03/23/2021 18:34:14 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir=/netscratch/nehring/projects/opensubtitles/datadir/models/gpt2-small, overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=IntervalStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=32, per_device_eval_batch_size=32, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=10.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_ratio=0.0, warmup_steps=0, logging_dir=runs/Mar23_18-34-14_graz, logging_strategy=IntervalStrategy.STEPS, logging_first_step=False, logging_steps=500, save_strategy=IntervalStrategy.STEPS, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=/netscratch/nehring/projects/opensubtitles/datadir/models/gpt2-small, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, report_to=['tensorboard'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, _n_gpu=6)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10872/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10871 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10871/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10871/comments | https://api.github.com/repos/huggingface/transformers/issues/10871/events | https://github.com/huggingface/transformers/issues/10871 | 838,969,705 | MDU6SXNzdWU4Mzg5Njk3MDU= | 10,871 | not created config.json in Wav2Vec2ForCTC for ASR | {
"login": "Kowsher",
"id": 16461536,
"node_id": "MDQ6VXNlcjE2NDYxNTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/16461536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kowsher",
"html_url": "https://github.com/Kowsher",
"followers_url": "https://api.github.com/users/Kowsher/followers",
"following_url": "https://api.github.com/users/Kowsher/following{/other_user}",
"gists_url": "https://api.github.com/users/Kowsher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kowsher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kowsher/subscriptions",
"organizations_url": "https://api.github.com/users/Kowsher/orgs",
"repos_url": "https://api.github.com/users/Kowsher/repos",
"events_url": "https://api.github.com/users/Kowsher/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kowsher/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | NONE | null | After saving the trained model I cannot load it back. It gives this error:
/content/gdrive/MyDrive/wav2vec2-large-xlsr-hindi' is the correct path to a directory containing a config.json file
Here is how I load the model:
model = Wav2Vec2ForCTC.from_pretrained("/content/gdrive/MyDrive/wav2vec2-large-xlsr-hindi").to("cuda")
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir="/content/gdrive/MyDrive/wav2vec2-large-xlsr-hindi",
group_by_length=True,
per_device_train_batch_size=16,
gradient_accumulation_steps=2,
evaluation_strategy="steps",
num_train_epochs=30,
fp16=True,
save_steps=400, #this would mean every 400 steps model gets saved which also means Google drive gets full
eval_steps=400,
logging_steps=400,
#learning_rate=3e-4,
learning_rate=0.1, # this is just for demo
warmup_steps=500,
save_total_limit=2,
)
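For reference, a possible workaround (an assumption on my part, not verified against this setup): saving the model explicitly writes a config.json alongside the weights.
```python
# Either call writes config.json (plus the weights) into the target directory.
trainer.save_model("/content/gdrive/MyDrive/wav2vec2-large-xlsr-hindi")
# or, for the bare model object:
model.save_pretrained("/content/gdrive/MyDrive/wav2vec2-large-xlsr-hindi")
```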
No config.json was created in my saved model directory. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10871/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10870 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10870/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10870/comments | https://api.github.com/repos/huggingface/transformers/issues/10870/events | https://github.com/huggingface/transformers/pull/10870 | 838,944,405 | MDExOlB1bGxSZXF1ZXN0NTk5MDM5Mjg2 | 10,870 | Sm trainer smp init fix | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"LGTM! I ran one training job successfully with these changes. ",
"I tested it with `pytorch1.7.1`\r\n`564829616587.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-training:1.7.1-transformers4.4.0-py36-gpu-cu110-ubuntu18.04` \r\nand `pytorch1.6` \r\n`763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-training:1.6.0-transformers4.4.2-gpu-py36-cu110-ubuntu18.04`\r\n"
] | 1,616 | 1,616 | 1,616 | MEMBER | null | # What does this PR do?
Fixes the `SageMakerTrainer` `smp.init` call for `smp 1.3`. It also replaces `is_smdistributed_available` with the more robust `is_sagemaker_model_parallel_available`.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10870/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10870",
"html_url": "https://github.com/huggingface/transformers/pull/10870",
"diff_url": "https://github.com/huggingface/transformers/pull/10870.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10870.patch",
"merged_at": 1616526475000
} |
https://api.github.com/repos/huggingface/transformers/issues/10869 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10869/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10869/comments | https://api.github.com/repos/huggingface/transformers/issues/10869/events | https://github.com/huggingface/transformers/issues/10869 | 838,938,130 | MDU6SXNzdWU4Mzg5MzgxMzA= | 10,869 | Camembert-base MaskedLM has different config settings that actual camambert-base | {
"login": "fatemerhmi",
"id": 16163952,
"node_id": "MDQ6VXNlcjE2MTYzOTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/16163952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fatemerhmi",
"html_url": "https://github.com/fatemerhmi",
"followers_url": "https://api.github.com/users/fatemerhmi/followers",
"following_url": "https://api.github.com/users/fatemerhmi/following{/other_user}",
"gists_url": "https://api.github.com/users/fatemerhmi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fatemerhmi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fatemerhmi/subscriptions",
"organizations_url": "https://api.github.com/users/fatemerhmi/orgs",
"repos_url": "https://api.github.com/users/fatemerhmi/repos",
"events_url": "https://api.github.com/users/fatemerhmi/events{/privacy}",
"received_events_url": "https://api.github.com/users/fatemerhmi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"This is still a problem and if someone could address this would be great. \r\n",
"Hello! You're instantiating a `CamembertConfig` without specifying any parameters, so it is initialized with the defaults (which are based on the BERT architecture as the configuration inherits from it).\r\n\r\nIt is not expected to be the same as `camembert-base`, nor is it specified in the documentation.\r\n\r\nIf you would like to obtain a configuration object that is the exact same as `camembert-base`, I would recommend instantiating your configuration object from that checkpoint:\r\n\r\n```py\r\nfrom transformers import CamembertConfig\r\nconfig = CamembertConfig.from_pretrained(\"camembert-base\")\r\n```\r\n\r\nYou won't have a problem to load the model then:\r\n\r\n```py\r\nfrom transformers import CamembertForMaskedLM\r\nmodel = CamembertForMaskedLM.from_pretrained(\"camembert-base\", config=config)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,622 | 1,622 | NONE | null | ## Environment info
- `transformers` version: 4.1.1
- Platform: Linux-4.15.0-45-generic-x86_64-with-debian-10.2
- Python version: 3.7.3
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik @sgugger
## Information
Model I am using: [CamemBERT](https://huggingface.co/transformers/model_doc/camembert.html#camembert):
The problem arises when using:
[CamembertForMaskedLM](https://huggingface.co/transformers/model_doc/camembert.html#camembertformaskedlm)
The tasks I am working on is:
I am training a Camembert model with a MaskedLM head (using a private dataset).
## To reproduce
Steps to reproduce the behaviour:
1. load camambert config file:
```python
from transformers import CamembertConfig
config = CamembertConfig()
config
```
output:
```
CamembertConfig {
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "camembert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"type_vocab_size": 2,
"vocab_size": 30522
}
```
2. load camambert tokenizer
```python
from transformers import CamembertTokenizer
tokenizer = CamembertTokenizer.from_pretrained(TOKENIZER_DIR)
```
3. load camembert for MLM
```
from transformers import CamembertForMaskedLM
model = CamembertForMaskedLM.from_pretrained( model_name_or_path, config=config)
```
output:
```python
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-94-3a1a4ae80b3a> in <module>
1 from transformers import CamembertForMaskedLM
----> 2 model = CamembertForMaskedLM.from_pretrained( model_name_or_path, config=config)
/usr/local/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1154 raise RuntimeError(
1155 "Error(s) in loading state_dict for {}:\n\t{}".format(
-> 1156 model.__class__.__name__, "\n\t".join(error_msgs)
1157 )
1158 )
RuntimeError: Error(s) in loading state_dict for CamembertForMaskedLM:
size mismatch for roberta.embeddings.word_embeddings.weight: copying a param with shape torch.Size([32005, 768]) from checkpoint, the shape in current model is torch.Size([30522, 768]).
size mismatch for roberta.embeddings.position_embeddings.weight: copying a param with shape torch.Size([514, 768]) from checkpoint, the shape in current model is torch.Size([512, 768]).
size mismatch for roberta.embeddings.token_type_embeddings.weight: copying a param with shape torch.Size([1, 768]) from checkpoint, the shape in current model is torch.Size([2, 768]).
size mismatch for lm_head.bias: copying a param with shape torch.Size([32005]) from checkpoint, the shape in current model is torch.Size([30522]).
size mismatch for lm_head.decoder.weight: copying a param with shape torch.Size([32005, 768]) from checkpoint, the shape in current model is torch.Size([30522, 768]).
```
## Expected behavior
If I replace step 3 with:
```python
from transformers import CamembertForMaskedLM
model = CamembertForMaskedLM.from_pretrained( model_name_or_path)
```
I won't receive any error, but when I print out the config details (`model.config`) they are not correct:
output:
```
CamembertConfig {
"_name_or_path": "./models_weight/camembert-base",
"architectures": [
"CamembertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 5,
"eos_token_id": 6,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "camembert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"output_past": true,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"type_vocab_size": 1,
"vocab_size": 32005
}
```
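For completeness, a minimal sketch of the approach that avoids the size mismatch above: load the configuration from the checkpoint itself (so it matches the reference config linked below) instead of instantiating a default `CamembertConfig()`.
```python
from transformers import CamembertConfig, CamembertForMaskedLM

# Take the configuration from the checkpoint so vocab_size, position embeddings, etc. match.
config = CamembertConfig.from_pretrained("camembert-base")
model = CamembertForMaskedLM.from_pretrained("camembert-base", config=config)
```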
The correct camembert config is provided [here](https://huggingface.co/camembert-base/resolve/main/config.json). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10869/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10869/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10868 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10868/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10868/comments | https://api.github.com/repos/huggingface/transformers/issues/10868/events | https://github.com/huggingface/transformers/pull/10868 | 838,875,652 | MDExOlB1bGxSZXF1ZXN0NTk4OTgxNjY5 | 10,868 | [Examples] Added predict stage and Updated Example Template | {
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | # What does this PR do?
* Adds a predict stage to the `run_xnli.py` text-classification example (a rough sketch of the pattern is shown below)
* Updates the example template for the predict stage
Fixes #10482
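A rough illustration of the usual predict-stage pattern in the example scripts (my own sketch, not this PR's diff); `trainer`, `training_args` and `predict_dataset` are assumed to exist already.
```python
import os
import numpy as np

if training_args.do_predict:
    predictions = trainer.predict(predict_dataset).predictions
    predictions = np.argmax(predictions, axis=1)  # pick the highest-scoring label
    output_predict_file = os.path.join(training_args.output_dir, "predictions.txt")
    with open(output_predict_file, "w") as writer:
        for index, item in enumerate(predictions):
            writer.write(f"{index}\t{item}\n")
```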
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? #10482
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stas00 @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10868/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10868/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10868",
"html_url": "https://github.com/huggingface/transformers/pull/10868",
"diff_url": "https://github.com/huggingface/transformers/pull/10868.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10868.patch",
"merged_at": 1616521079000
} |
https://api.github.com/repos/huggingface/transformers/issues/10867 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10867/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10867/comments | https://api.github.com/repos/huggingface/transformers/issues/10867/events | https://github.com/huggingface/transformers/pull/10867 | 838,815,393 | MDExOlB1bGxSZXF1ZXN0NTk4OTMwODY4 | 10,867 | Amazon SageMaker Documentation | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | MEMBER | null | # What does this PR do?
Adds the Documentation page for "Run training on Amazon SageMaker". | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10867/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10867",
"html_url": "https://github.com/huggingface/transformers/pull/10867",
"diff_url": "https://github.com/huggingface/transformers/pull/10867.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10867.patch",
"merged_at": 1616511404000
} |
https://api.github.com/repos/huggingface/transformers/issues/10866 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10866/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10866/comments | https://api.github.com/repos/huggingface/transformers/issues/10866/events | https://github.com/huggingface/transformers/pull/10866 | 838,768,842 | MDExOlB1bGxSZXF1ZXN0NTk4ODkxODEz | 10,866 | add processing "cache" and augmentation | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @flozi00,\r\n\r\nCould you make the code quality test pass -> then I think we can merge this one :-)"
] | 1,616 | 1,619 | 1,619 | CONTRIBUTOR | null | # What does this PR do?
This PR stores the resampled Common Voice data on disk to speed up multiple runs.
Furthermore, it adds data augmentation to double the dataset size.
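Since the description is terse, here is an illustrative sketch of the general pattern (my own illustration, not this PR's diff): resample once inside a `datasets.map()` call, whose output is cached on disk and reused on later runs. The column names (`path`, `speech`) and the `common_voice` dataset object are assumptions.
```python
import torchaudio

# Common Voice clips are 48 kHz; Wav2Vec2 expects 16 kHz input.
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def speech_file_to_array_fn(batch):
    speech_array, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

# common_voice = common_voice.map(speech_file_to_array_fn)  # cached after the first run
```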
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @patil-suraj
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10866/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10866",
"html_url": "https://github.com/huggingface/transformers/pull/10866",
"diff_url": "https://github.com/huggingface/transformers/pull/10866.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10866.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10865 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10865/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10865/comments | https://api.github.com/repos/huggingface/transformers/issues/10865/events | https://github.com/huggingface/transformers/pull/10865 | 838,730,017 | MDExOlB1bGxSZXF1ZXN0NTk4ODYwMDYy | 10,865 | Update the example template for a no Trainer option | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | COLLABORATOR | null | # What does this PR do?
Expand the template of new examples with a new option to build an example like the new [run_glue_no_trainer.py](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue_no_trainer.py). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10865/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10865",
"html_url": "https://github.com/huggingface/transformers/pull/10865",
"diff_url": "https://github.com/huggingface/transformers/pull/10865.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10865.patch",
"merged_at": 1616508160000
} |
https://api.github.com/repos/huggingface/transformers/issues/10864 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10864/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10864/comments | https://api.github.com/repos/huggingface/transformers/issues/10864/events | https://github.com/huggingface/transformers/issues/10864 | 838,657,627 | MDU6SXNzdWU4Mzg2NTc2Mjc= | 10,864 | transformers import error | {
"login": "KYUNGGUK-CHOI",
"id": 55866896,
"node_id": "MDQ6VXNlcjU1ODY2ODk2",
"avatar_url": "https://avatars.githubusercontent.com/u/55866896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KYUNGGUK-CHOI",
"html_url": "https://github.com/KYUNGGUK-CHOI",
"followers_url": "https://api.github.com/users/KYUNGGUK-CHOI/followers",
"following_url": "https://api.github.com/users/KYUNGGUK-CHOI/following{/other_user}",
"gists_url": "https://api.github.com/users/KYUNGGUK-CHOI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KYUNGGUK-CHOI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KYUNGGUK-CHOI/subscriptions",
"organizations_url": "https://api.github.com/users/KYUNGGUK-CHOI/orgs",
"repos_url": "https://api.github.com/users/KYUNGGUK-CHOI/repos",
"events_url": "https://api.github.com/users/KYUNGGUK-CHOI/events{/privacy}",
"received_events_url": "https://api.github.com/users/KYUNGGUK-CHOI/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! I would guess something is wrongly setup in your local environment; especially it it works in colab! Are you sure you're using the `python` from the same environment as your `pip3`?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.2
- Platform: MACOS
- Python version: Python 3.6.13
- PyTorch version (GPU?): 1.8.0 (cpu)
- Tensorflow version (GPU?): tensorflow-cpu 2.4.1
- Using GPU in script?: no
- Using distributed or parallel set-up in script?:no
### Who can help
@LysandreJik
I setup my conda envs as below
<img width="470" alt="스크린샷 2021-03-23 오후 8 57 52" src="https://user-images.githubusercontent.com/55866896/112142854-70890b80-8c1a-11eb-8752-14857d4529e7.png">
and there are all librarys needed list in !pip3 list
<img width="310" alt="스크린샷 2021-03-23 오후 9 05 00" src="https://user-images.githubusercontent.com/55866896/112143640-84813d00-8c1b-11eb-8e6c-735d0a57e1c2.png">
but whenever I try to import BertTokenizer(from transformers import BertTokenizer), ImportError occurs.
<img width="662" alt="스크린샷 2021-03-23 오후 8 59 13" src="https://user-images.githubusercontent.com/55866896/112142993-a0381380-8c1a-11eb-8ec1-722d71a947ed.png">
I tried all process in google colab , it works well
I have no idea why it does not work in my Macbook local environment
please help me | {
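A quick sanity check (a hypothetical snippet, not from the original report) to confirm that the interpreter you are running is the same environment `pip3` installed `transformers` into:

```python
import sys

# Which interpreter is actually running this code?
print(sys.executable)

# If this import also fails here, the package was installed into a different environment.
import transformers

print(transformers.__version__)
print(transformers.__file__)  # shows which site-packages the import resolved to
```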
"url": "https://api.github.com/repos/huggingface/transformers/issues/10864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10864/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10863 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10863/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10863/comments | https://api.github.com/repos/huggingface/transformers/issues/10863/events | https://github.com/huggingface/transformers/pull/10863 | 838,603,070 | MDExOlB1bGxSZXF1ZXN0NTk4NzU2ODE4 | 10,863 | Fix p_mask cls token masking in question-answering pipeline | {
"login": "mmaslankowska-neurosys",
"id": 77386734,
"node_id": "MDQ6VXNlcjc3Mzg2NzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/77386734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmaslankowska-neurosys",
"html_url": "https://github.com/mmaslankowska-neurosys",
"followers_url": "https://api.github.com/users/mmaslankowska-neurosys/followers",
"following_url": "https://api.github.com/users/mmaslankowska-neurosys/following{/other_user}",
"gists_url": "https://api.github.com/users/mmaslankowska-neurosys/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmaslankowska-neurosys/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmaslankowska-neurosys/subscriptions",
"organizations_url": "https://api.github.com/users/mmaslankowska-neurosys/orgs",
"repos_url": "https://api.github.com/users/mmaslankowska-neurosys/repos",
"events_url": "https://api.github.com/users/mmaslankowska-neurosys/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmaslankowska-neurosys/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | # What does this PR do?
It fixes a really small bug described in detail in the issue - it only adds a condition to the if statement responsible for unmasking the `cls_token_id` in the `p_mask` used in the question answering pipeline.
Fixes #10810
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10863/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10863",
"html_url": "https://github.com/huggingface/transformers/pull/10863",
"diff_url": "https://github.com/huggingface/transformers/pull/10863.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10863.patch",
"merged_at": 1616504920000
} |
https://api.github.com/repos/huggingface/transformers/issues/10862 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10862/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10862/comments | https://api.github.com/repos/huggingface/transformers/issues/10862/events | https://github.com/huggingface/transformers/pull/10862 | 838,480,609 | MDExOlB1bGxSZXF1ZXN0NTk4NjU1ODA3 | 10,862 | Fixed confusing order of args in generate() docstring | {
"login": "RafaelWO",
"id": 38643099,
"node_id": "MDQ6VXNlcjM4NjQzMDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/38643099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RafaelWO",
"html_url": "https://github.com/RafaelWO",
"followers_url": "https://api.github.com/users/RafaelWO/followers",
"following_url": "https://api.github.com/users/RafaelWO/following{/other_user}",
"gists_url": "https://api.github.com/users/RafaelWO/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RafaelWO/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RafaelWO/subscriptions",
"organizations_url": "https://api.github.com/users/RafaelWO/orgs",
"repos_url": "https://api.github.com/users/RafaelWO/repos",
"events_url": "https://api.github.com/users/RafaelWO/events{/privacy}",
"received_events_url": "https://api.github.com/users/RafaelWO/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR addresses a (IMO) confusing parameter description in the docstring of `generate()`. Specifically, it is about the parameter `prefix_allowed_tokens_fn` which has to be of type `Callable[[int, torch.Tensor], List[int]]`. Since the description says _"This function takes 2 arguments `inputs_ids` and the batch ID `batch_id`"_ I created a function
```Python
def restrict_vocab(input_ids, batch_id): # incorrect!
# logic
```
But then I realised that the order of the parameters is wrong (the type hint would indicate that `batch_id` comes first though):
```Python
def restrict_vocab(batch_id, input_ids): # correct :)
# logic
```
Therefore, I fixed the order of the parameters in the description of `prefix_allowed_tokens_fn`, i.e. exchanged `inputs_ids` with `batch_id`. Now it should be easier to read and understand.
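For context, a minimal sketch (hypothetical checkpoint and restriction, not part of this PR) of how such a callable is passed to `generate()`; the batch ID comes first and the already-generated `input_ids` second:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
input_ids = tokenizer("Hello", return_tensors="pt").input_ids

allowed_token_ids = list(range(1000))  # placeholder vocabulary restriction

def restrict_vocab(batch_id, input_ids):
    # batch_id (int) first, the so-far-generated input_ids (torch.Tensor) second
    return allowed_token_ids

outputs = model.generate(
    input_ids,
    prefix_allowed_tokens_fn=restrict_vocab,
    num_beams=4,
    max_length=20,
)
print(tokenizer.decode(outputs[0]))
```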
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
Documentation -> @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10862/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10862",
"html_url": "https://github.com/huggingface/transformers/pull/10862",
"diff_url": "https://github.com/huggingface/transformers/pull/10862.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10862.patch",
"merged_at": 1616521702000
} |
https://api.github.com/repos/huggingface/transformers/issues/10861 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10861/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10861/comments | https://api.github.com/repos/huggingface/transformers/issues/10861/events | https://github.com/huggingface/transformers/pull/10861 | 838,348,266 | MDExOlB1bGxSZXF1ZXN0NTk4NTQzMjUw | 10,861 | [trainer] Fixes Typo in Predict Method of Trainer | {
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | # What does this PR do?
Fixes a typo in the predict method of `Trainer`. This enables saving files with the correct `test` prefix in the predict stage. Earlier they were saved with the `eval` prefix for both the predict and evaluate stages.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? This is discussed in #10482
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?:
@stas00 @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10861/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10861",
"html_url": "https://github.com/huggingface/transformers/pull/10861",
"diff_url": "https://github.com/huggingface/transformers/pull/10861.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10861.patch",
"merged_at": 1616501728000
} |
https://api.github.com/repos/huggingface/transformers/issues/10860 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10860/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10860/comments | https://api.github.com/repos/huggingface/transformers/issues/10860/events | https://github.com/huggingface/transformers/issues/10860 | 838,319,736 | MDU6SXNzdWU4MzgzMTk3MzY= | 10,860 | The exact English pretraining data and Chinese pretraining data that are exact same to the BERT paper's pretraining data. | {
"login": "guotong1988",
"id": 4702353,
"node_id": "MDQ6VXNlcjQ3MDIzNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4702353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guotong1988",
"html_url": "https://github.com/guotong1988",
"followers_url": "https://api.github.com/users/guotong1988/followers",
"following_url": "https://api.github.com/users/guotong1988/following{/other_user}",
"gists_url": "https://api.github.com/users/guotong1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guotong1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guotong1988/subscriptions",
"organizations_url": "https://api.github.com/users/guotong1988/orgs",
"repos_url": "https://api.github.com/users/guotong1988/repos",
"events_url": "https://api.github.com/users/guotong1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/guotong1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | CONTRIBUTOR | null | (Sorry I can not visit the forum.)
Any one know where to get them?
Thank you and thank you.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10860/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10859 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10859/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10859/comments | https://api.github.com/repos/huggingface/transformers/issues/10859/events | https://github.com/huggingface/transformers/pull/10859 | 838,290,770 | MDExOlB1bGxSZXF1ZXN0NTk4NDk2MDUx | 10,859 | [file_utils] import refactor | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | This is just a small refactor of the import code, which currently looks a bit odd due to 8 levels of nesting; no functional change.
@LysandreJik, @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10859/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10859",
"html_url": "https://github.com/huggingface/transformers/pull/10859",
"diff_url": "https://github.com/huggingface/transformers/pull/10859.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10859.patch",
"merged_at": 1616517701000
} |
https://api.github.com/repos/huggingface/transformers/issues/10858 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10858/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10858/comments | https://api.github.com/repos/huggingface/transformers/issues/10858/events | https://github.com/huggingface/transformers/issues/10858 | 838,268,704 | MDU6SXNzdWU4MzgyNjg3MDQ= | 10,858 | If run trainer._maybe_log_save_evaluate() twice continuously, it will appear “ZeroDivisionError: float division by zero” | {
"login": "niuzaisheng",
"id": 29062892,
"node_id": "MDQ6VXNlcjI5MDYyODky",
"avatar_url": "https://avatars.githubusercontent.com/u/29062892?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/niuzaisheng",
"html_url": "https://github.com/niuzaisheng",
"followers_url": "https://api.github.com/users/niuzaisheng/followers",
"following_url": "https://api.github.com/users/niuzaisheng/following{/other_user}",
"gists_url": "https://api.github.com/users/niuzaisheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/niuzaisheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/niuzaisheng/subscriptions",
"organizations_url": "https://api.github.com/users/niuzaisheng/orgs",
"repos_url": "https://api.github.com/users/niuzaisheng/repos",
"events_url": "https://api.github.com/users/niuzaisheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/niuzaisheng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@niuzaisheng The function starting with '_' is a hint that this is an internal function and is not for external use. Is there any particular reason you are using this and not any alternatives?",
"Looking into the function `_maybe_log_save_evaluate` should ideally not contain line 1225 `self._globalstep_last_logged = self.state.global_step`. I can open a PR and rectify this but not sure if this is a necessary change. \r\n@sgugger Please comment.",
"Because my training ended after the first epoch and raised `ZeroDivisionError` exception, I looked inside the source code. I think this is the reason why my training cannot go on.",
"@niuzaisheng Can you provide sample code and the full stacktrace?",
"Sorry, I can't give out all my training script. But this problem appeared coincidentally. If `should_log ` just right at the end of an epoch, the func `_maybe_log_save_evaluate` will be called twice continuously.\r\n\r\n<img width=\"1280\" alt=\"截屏2021-03-23 下午7 30 29\" src=\"https://user-images.githubusercontent.com/29062892/112139872-429dca00-8c0e-11eb-82f5-d6c20b65bd0e.png\">\r\n\r\nhere is my stacktrace:\r\n```\r\n100%|█████████▉| 18028/18030 [5:21:43<00:01, 1.49it/s]\r\n100%|█████████▉| 18029/18030 [5:21:43<00:00, 1.52it/s]\r\n100%|██████████| 18030/18030 [5:21:44<00:00, 1.62it/s]{'loss': 1.3667, 'learning_rate': 0.0, 'epoch': 10.0, 'step': 18030}\r\nTraceback (most recent call last):\r\n File \"/home/XXXX/anaconda3/envs/allennlp/lib/python3.7/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/home/XXXX/anaconda3/envs/allennlp/lib/python3.7/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/XXXX/XXXX/XXXX/run_train.py\", line 297, in <module>\r\n main()\r\n File \"/home/XXXX/XXXX/XXXX/run_train.py\", line 257, in main\r\n model_path=model_args.name_or_path if os.path.isdir(model_args.name_or_path) else None\r\n File \"/home/XXXX/anaconda3/envs/allennlp/lib/python3.7/site-packages/transformers/trainer.py\", line 989, in train\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)\r\n File \"/home/XXXX/anaconda3/envs/allennlp/lib/python3.7/site-packages/transformers/trainer.py\", line 1044, in _maybe_log_save_evaluate\r\n logs[\"loss\"] = round(tr_loss_scalar / (self.state.global_step - self._globalstep_last_logged), 4)\r\nZeroDivisionError: float division by zero\r\n\r\n100%|██████████| 18030/18030 [5:21:54<00:00, 1.07s/it]\r\n```\r\nMy `logging_steps` is set to 10 steps. And 18030 is just at the end of an epoch, Coincidentally.",
"So, at the first time in line 983 call `_maybe_log_save_evaluate()`, ` self._globalstep_last_logged ` will be set equal to `self.state.global_step` by line 1052.\r\nAt second time in line 989 call `_maybe_log_save_evaluate()` , `logs[\"loss\"] = round(tr_loss_scalar / (self.state.global_step - self._globalstep_last_logged), 4)` will raise ZeroDivisionError in line 1044.\r\n\r\nI can avoid this problem by modifying `logging_steps` to other numbers.",
"Got it. This should be rectified. A simple : \r\n`if (step + 1) != steps_in_epoch:\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)`\r\nin line 1138 should work?\r\n",
"If you want to avoid two consecutive calls `_maybe_log_save_evaluate`, we can do this, but will it affect `should_evaluate` and `should_save` in `_maybe_log_save_evaluate` ? If `evaluation_strategy` is set to be `epoch`, will it affect ?",
"As its name indicates `_maybe_log_save_evaluate` does not log at each epoch, it depends on the value of the `self.control.should_log` variable which won't always be `True`. Since your log strategy is either `\"steps\"` or `\"epoch\"` it won't run the line\r\n```\r\nlogs[\"loss\"] = round(tr_loss_scalar / (self.state.global_step - self._globalstep_last_logged), 4)\r\n```\r\ntwice in a row.\r\n\r\nTo debug further why you have a problem, we would need to know what training arguments you are using and how you launch your training, which you are not willing to provide.",
"I know why now. I have override the`trainer.log()` func, and didn't add `self.control = self.callback_handler.on_log(self.args, self.state, self.control, logs)` at the end of the func. So `self.control.should_log` didn't set to False after log action.\r\nBecause my code was upgraded from the previous transformers 3.X version, this piece of code was not updated.\r\n\r\nThanks for your help! "
] | 1,616 | 1,616 | 1,616 | NONE | null | ## Environment info
- `transformers` version: 4.3.3
- Platform: Linux-4.15.0-139-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.0
- PyTorch version (GPU?): 1.8.0+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Irrelevant
- Using distributed or parallel set-up in script?: no
### Who can help
Library:
- trainer: @sgugger
## Information
In `transformers/trainer.py`, if the function `trainer._maybe_log_save_evaluate()` is called twice consecutively, `self.state.global_step - self._globalstep_last_logged` will be zero, so a `ZeroDivisionError` exception is raised at line 1044:
`logs["loss"] = round(tr_loss_scalar / (self.state.global_step - self._globalstep_last_logged), 4)`
This situation occurs when an epoch finishes and `_maybe_log_save_evaluate()` is called twice, at line 983 and line 989, while waiting for the next epoch.
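As the resolution in the comments above indicates, the double call came from a custom `log()` override that never reset the control flags. A minimal sketch (not the reporter's actual code) of an override that keeps them in sync:

```python
from transformers import Trainer

class MyTrainer(Trainer):
    def log(self, logs):
        # ... custom logging, e.g. sending `logs` to an external tracker ...

        # Without this call, self.control.should_log stays True, so
        # _maybe_log_save_evaluate logs twice in a row at the end of an epoch
        # and `global_step - _globalstep_last_logged` becomes zero.
        self.control = self.callback_handler.on_log(self.args, self.state, self.control, logs)
```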
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10858/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10857 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10857/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10857/comments | https://api.github.com/repos/huggingface/transformers/issues/10857/events | https://github.com/huggingface/transformers/pull/10857 | 838,124,305 | MDExOlB1bGxSZXF1ZXN0NTk4MzU0MTA2 | 10,857 | Make convert_to_onnx runable as script again | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | COLLABORATOR | null | # What does this PR do?
When reworking the inits, `convert_graph_to_onnx.py` got its imports replaced by relative imports, which broke its ability to be run as a script. This PR fixes that. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10857/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10857",
"html_url": "https://github.com/huggingface/transformers/pull/10857",
"diff_url": "https://github.com/huggingface/transformers/pull/10857.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10857.patch",
"merged_at": 1616465799000
} |
https://api.github.com/repos/huggingface/transformers/issues/10856 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10856/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10856/comments | https://api.github.com/repos/huggingface/transformers/issues/10856/events | https://github.com/huggingface/transformers/pull/10856 | 838,005,439 | MDExOlB1bGxSZXF1ZXN0NTk4MjU0NjY2 | 10,856 | Use DataCollatorForSeq2Seq in run_summarization in all cases | {
"login": "elsanns",
"id": 3648991,
"node_id": "MDQ6VXNlcjM2NDg5OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3648991?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elsanns",
"html_url": "https://github.com/elsanns",
"followers_url": "https://api.github.com/users/elsanns/followers",
"following_url": "https://api.github.com/users/elsanns/following{/other_user}",
"gists_url": "https://api.github.com/users/elsanns/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elsanns/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elsanns/subscriptions",
"organizations_url": "https://api.github.com/users/elsanns/orgs",
"repos_url": "https://api.github.com/users/elsanns/repos",
"events_url": "https://api.github.com/users/elsanns/events{/privacy}",
"received_events_url": "https://api.github.com/users/elsanns/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot!"
] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #10791
This PR uses an instance of DataCollatorForSeq2Seq as a data collator regardless of the value of pad_to_max_length.
It fixes the problem of the script breaking when the following two parameters are set together:
- label_smoothing_factor
- pad_to_max_length
Removes unnecessary `default_data_collator` import.
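For illustration, a hedged sketch (using an arbitrary `t5-small` placeholder checkpoint, not a verbatim excerpt from the script) of instantiating the collator unconditionally:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, DataCollatorForSeq2Seq

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Used whether or not inputs were padded to max length; -100 keeps padded
# label positions out of the loss, which is what label smoothing needs.
data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,
    label_pad_token_id=-100,
)
```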
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
Discussion: #10791
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger Now ran make quality ;), thanks!
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10856/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10856",
"html_url": "https://github.com/huggingface/transformers/pull/10856",
"diff_url": "https://github.com/huggingface/transformers/pull/10856.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10856.patch",
"merged_at": 1616439939000
} |
https://api.github.com/repos/huggingface/transformers/issues/10855 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10855/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10855/comments | https://api.github.com/repos/huggingface/transformers/issues/10855/events | https://github.com/huggingface/transformers/issues/10855 | 837,920,358 | MDU6SXNzdWU4Mzc5MjAzNTg= | 10,855 | m2m_100 finetuning not working (KeyError: none) | {
"login": "sergej-d",
"id": 75784026,
"node_id": "MDQ6VXNlcjc1Nzg0MDI2",
"avatar_url": "https://avatars.githubusercontent.com/u/75784026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sergej-d",
"html_url": "https://github.com/sergej-d",
"followers_url": "https://api.github.com/users/sergej-d/followers",
"following_url": "https://api.github.com/users/sergej-d/following{/other_user}",
"gists_url": "https://api.github.com/users/sergej-d/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sergej-d/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sergej-d/subscriptions",
"organizations_url": "https://api.github.com/users/sergej-d/orgs",
"repos_url": "https://api.github.com/users/sergej-d/repos",
"events_url": "https://api.github.com/users/sergej-d/events{/privacy}",
"received_events_url": "https://api.github.com/users/sergej-d/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @sergej-d \r\n\r\nThe `run_translation.py` script now supports fine-tuning `M2M100` (see #11170), for this model you should now also pass the `--forced_bos_token` argument which is usually similar to the the `--target_lang` ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,621 | 1,621 | NONE | null | - `transformers` version: 4.5.0.dev0
- Python version: 3.8
- PyTorch version (GPU?): 1.7.1+cu110
- Using GPU in script?: Yes, RTX 3090
- Using distributed or parallel set-up in script?: No
I am trying to finetune m2m:
python3 run_translation.py \
--model_name_or_path=facebook/m2m100_418M \
--do_train \
--do_eval \
--source_lang de \
--target_lang en \
--fp16=True \
--num_train_epochs 1 \
--evaluation_strategy epoch \
--dataset_name wmt15 \
--dataset_config_name de-en \
--output_dir /home/s/m2m_output/DE-EN \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
And I'm getting this error:
All model checkpoint weights were used when initializing M2M100ForConditionalGeneration.
All the weights of M2M100ForConditionalGeneration were initialized from the model checkpoint at facebook/m2m100_418M.
If your task is similar to the task the model of the checkpoint was trained on, you can already use M2M100ForConditionalGeneration for predictions without further training.
Traceback (most recent call last):
File "run_translation.py", line 562, in <module>
main()
File "run_translation.py", line 401, in main
train_dataset = train_dataset.map(
File "/home/s/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1120, in map
update_data = does_function_return_dict(test_inputs, test_indices)
File "/home/s/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1091, in does_function_return_dict
function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "run_translation.py", line 382, in preprocess_function
with tokenizer.as_target_tokenizer():
File "/opt/conda/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/home/s/.local/lib/python3.8/site-packages/transformers/models/m2m_100/tokenization_m2m_100.py", line 299, in as_target_tokenizer
self.set_tgt_lang_special_tokens(self.tgt_lang)
File "/home/s/.local/lib/python3.8/site-packages/transformers/models/m2m_100/tokenization_m2m_100.py", line 312, in set_tgt_lang_special_tokens
lang_token = self.get_lang_token(tgt_lang)
File "/home/s/.local/lib/python3.8/site-packages/transformers/models/m2m_100/tokenization_m2m_100.py", line 318, in get_lang_token
return self.lang_code_to_token[lang]
KeyError: None
@patrickvonplaten @patil-suraj Any ideas on how to fix this?
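For reference, a minimal standalone sketch (outside the training script, using the model card's example sentence) of a tokenizer and generation setup that avoids the `KeyError: None`; it sets `tgt_lang` and `forced_bos_token_id` explicitly, which is what the later `--forced_bos_token` support wires up:

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
# Setting src_lang/tgt_lang avoids the KeyError raised inside as_target_tokenizer()
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="de", tgt_lang="en")

encoded = tokenizer("Das Haus ist wunderbar.", return_tensors="pt")
# M2M100 needs the target language id forced as the first generated token
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```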
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10855/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10854 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10854/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10854/comments | https://api.github.com/repos/huggingface/transformers/issues/10854/events | https://github.com/huggingface/transformers/pull/10854 | 837,794,114 | MDExOlB1bGxSZXF1ZXN0NTk4MDc1NjQw | 10,854 | Run summarization always use data collator for seq2 seq | {
"login": "elsanns",
"id": 3648991,
"node_id": "MDQ6VXNlcjM2NDg5OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3648991?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elsanns",
"html_url": "https://github.com/elsanns",
"followers_url": "https://api.github.com/users/elsanns/followers",
"following_url": "https://api.github.com/users/elsanns/following{/other_user}",
"gists_url": "https://api.github.com/users/elsanns/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elsanns/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elsanns/subscriptions",
"organizations_url": "https://api.github.com/users/elsanns/orgs",
"repos_url": "https://api.github.com/users/elsanns/repos",
"events_url": "https://api.github.com/users/elsanns/events{/privacy}",
"received_events_url": "https://api.github.com/users/elsanns/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #10791
This PR uses an instance of DataCollatorForSeq2Seq as a data collator regardless of the value of `pad_to_max_length`.
It fixes the problem of the script breaking when the following two parameters are set together:
- label_smoothing_factor
- pad_to_max_length
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
Discussion: https://github.com/huggingface/transformers/issues/10791
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10854/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10854",
"html_url": "https://github.com/huggingface/transformers/pull/10854",
"diff_url": "https://github.com/huggingface/transformers/pull/10854.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10854.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10853 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10853/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10853/comments | https://api.github.com/repos/huggingface/transformers/issues/10853/events | https://github.com/huggingface/transformers/issues/10853 | 837,692,755 | MDU6SXNzdWU4Mzc2OTI3NTU= | 10,853 | Error building extension 'fused_adam' | {
"login": "saichandrapandraju",
"id": 41769919,
"node_id": "MDQ6VXNlcjQxNzY5OTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/41769919?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saichandrapandraju",
"html_url": "https://github.com/saichandrapandraju",
"followers_url": "https://api.github.com/users/saichandrapandraju/followers",
"following_url": "https://api.github.com/users/saichandrapandraju/following{/other_user}",
"gists_url": "https://api.github.com/users/saichandrapandraju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saichandrapandraju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saichandrapandraju/subscriptions",
"organizations_url": "https://api.github.com/users/saichandrapandraju/orgs",
"repos_url": "https://api.github.com/users/saichandrapandraju/repos",
"events_url": "https://api.github.com/users/saichandrapandraju/events{/privacy}",
"received_events_url": "https://api.github.com/users/saichandrapandraju/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"Could you paste the whole stacktrace you're having? Also it seems you're running on a colab, would it be possible to share that colab so we can take a look? Thank you!\r\nPinging @stas00 ",
"1. As @LysandreJik suggested always report the full backtrace\r\n2. deepspeed requires a pytorch matching cuda version installed and configured - please refer to:\r\nhttps://huggingface.co/transformers/main_classes/trainer.html#installation-notes\r\n\r\n see the notebook I created on how to make it work on colab: https://github.com/stas00/porting/blob/master/transformers/deepspeed/DeepSpeed_on_colab_CLI.ipynb\r\n\r\n3. In general deepspeed building errors belong to https://github.com/microsoft/DeepSpeed/issues as HF only integrates it\r\n",
"Thanks @LysandreJik and @stas00 . \r\n\r\nI upgraded to `torch-1.8.0+cu101` and still getting the error. I think issue is with DeepSpeed itself. So raised an issue in their repo and [here](https://github.com/microsoft/DeepSpeed/issues/885) it is.\r\n\r\nBelow is the stacktrace - \r\n```\r\n[2021-03-23 07:03:49,374] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed info: version=0.3.13, git-hash=unknown, git-branch=unknown\r\n[2021-03-23 07:03:49,407] [INFO] [engine.py:77:_initialize_parameter_parallel_groups] data_parallel_size: 1, parameter_parallel_size: 1\r\nUsing /home/jovyan/.cache/torch_extensions as PyTorch extensions root...\r\nCreating extension directory /home/jovyan/.cache/torch_extensions/fused_adam...\r\nDetected CUDA files, patching ldflags\r\nEmitting ninja build file /home/jovyan/.cache/torch_extensions/fused_adam/build.ninja...\r\nBuilding extension module fused_adam...\r\nAllowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)\r\n---------------------------------------------------------------------------\r\nCalledProcessError Traceback (most recent call last)\r\n~/.local/lib/python3.6/site-packages/torch/utils/cpp_extension.py in _run_ninja_build(build_directory, verbose, error_prefix)\r\n 1672 check=True,\r\n-> 1673 env=env)\r\n 1674 except subprocess.CalledProcessError as e:\r\n\r\n/usr/lib/python3.6/subprocess.py in run(input, timeout, check, *popenargs, **kwargs)\r\n 437 raise CalledProcessError(retcode, process.args,\r\n--> 438 output=stdout, stderr=stderr)\r\n 439 return CompletedProcess(process.args, retcode, stdout, stderr)\r\n\r\nCalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-24-3435b262f1ae> in <module>\r\n----> 1 trainer.train()\r\n\r\n~/.local/lib/python3.6/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs)\r\n 901 delay_optimizer_creation = self.sharded_ddp is not None and self.sharded_ddp != ShardedDDPOption.SIMPLE\r\n 902 if self.args.deepspeed:\r\n--> 903 model, optimizer, lr_scheduler = init_deepspeed(self, num_training_steps=max_steps)\r\n 904 self.model = model.module\r\n 905 self.model_wrapped = model # will get further wrapped in DDP\r\n\r\n~/.local/lib/python3.6/site-packages/transformers/integrations.py in init_deepspeed(trainer, num_training_steps)\r\n 416 model=model,\r\n 417 model_parameters=model_parameters,\r\n--> 418 config_params=config,\r\n 419 )\r\n 420 \r\n\r\n~/.local/lib/python3.6/site-packages/deepspeed/__init__.py in initialize(args, model, optimizer, model_parameters, training_data, lr_scheduler, mpu, dist_init_required, collate_fn, config_params)\r\n 123 dist_init_required=dist_init_required,\r\n 124 collate_fn=collate_fn,\r\n--> 125 config_params=config_params)\r\n 126 else:\r\n 127 assert mpu is None, \"mpu must be None with pipeline parallelism\"\r\n\r\n~/.local/lib/python3.6/site-packages/deepspeed/runtime/engine.py in __init__(self, args, model, optimizer, model_parameters, training_data, lr_scheduler, mpu, dist_init_required, collate_fn, config_params, dont_change_device)\r\n 181 self.lr_scheduler = None\r\n 182 if model_parameters or optimizer:\r\n--> 183 self._configure_optimizer(optimizer, model_parameters)\r\n 184 self._configure_lr_scheduler(lr_scheduler)\r\n 185 
self._report_progress(0)\r\n\r\n~/.local/lib/python3.6/site-packages/deepspeed/runtime/engine.py in _configure_optimizer(self, client_optimizer, model_parameters)\r\n 596 logger.info('Using client Optimizer as basic optimizer')\r\n 597 else:\r\n--> 598 basic_optimizer = self._configure_basic_optimizer(model_parameters)\r\n 599 if self.global_rank == 0:\r\n 600 logger.info(\r\n\r\n~/.local/lib/python3.6/site-packages/deepspeed/runtime/engine.py in _configure_basic_optimizer(self, model_parameters)\r\n 670 optimizer = FusedAdam(model_parameters,\r\n 671 **optimizer_parameters,\r\n--> 672 adam_w_mode=effective_adam_w_mode)\r\n 673 \r\n 674 elif self.optimizer_name() == LAMB_OPTIMIZER:\r\n\r\n~/.local/lib/python3.6/site-packages/deepspeed/ops/adam/fused_adam.py in __init__(self, params, lr, bias_correction, betas, eps, adam_w_mode, weight_decay, amsgrad, set_grad_none)\r\n 70 self.set_grad_none = set_grad_none\r\n 71 \r\n---> 72 fused_adam_cuda = FusedAdamBuilder().load()\r\n 73 # Skip buffer\r\n 74 self._dummy_overflow_buf = torch.cuda.IntTensor([0])\r\n\r\n~/.local/lib/python3.6/site-packages/deepspeed/ops/op_builder/builder.py in load(self, verbose)\r\n 213 return importlib.import_module(self.absolute_name())\r\n 214 else:\r\n--> 215 return self.jit_load(verbose)\r\n 216 \r\n 217 def jit_load(self, verbose=True):\r\n\r\n~/.local/lib/python3.6/site-packages/deepspeed/ops/op_builder/builder.py in jit_load(self, verbose)\r\n 250 extra_cuda_cflags=self.nvcc_args(),\r\n 251 extra_ldflags=self.extra_ldflags(),\r\n--> 252 verbose=verbose)\r\n 253 build_duration = time.time() - start_build\r\n 254 if verbose:\r\n\r\n~/.local/lib/python3.6/site-packages/torch/utils/cpp_extension.py in load(name, sources, extra_cflags, extra_cuda_cflags, extra_ldflags, extra_include_paths, build_directory, verbose, with_cuda, is_python_module, is_standalone, keep_intermediates)\r\n 1089 is_python_module,\r\n 1090 is_standalone,\r\n-> 1091 keep_intermediates=keep_intermediates)\r\n 1092 \r\n 1093 \r\n\r\n~/.local/lib/python3.6/site-packages/torch/utils/cpp_extension.py in _jit_compile(name, sources, extra_cflags, extra_cuda_cflags, extra_ldflags, extra_include_paths, build_directory, verbose, with_cuda, is_python_module, is_standalone, keep_intermediates)\r\n 1300 verbose=verbose,\r\n 1301 with_cuda=with_cuda,\r\n-> 1302 is_standalone=is_standalone)\r\n 1303 finally:\r\n 1304 baton.release()\r\n\r\n~/.local/lib/python3.6/site-packages/torch/utils/cpp_extension.py in _write_ninja_file_and_build_library(name, sources, extra_cflags, extra_cuda_cflags, extra_ldflags, extra_include_paths, build_directory, verbose, with_cuda, is_standalone)\r\n 1405 build_directory,\r\n 1406 verbose,\r\n-> 1407 error_prefix=f\"Error building extension '{name}'\")\r\n 1408 \r\n 1409 \r\n\r\n~/.local/lib/python3.6/site-packages/torch/utils/cpp_extension.py in _run_ninja_build(build_directory, verbose, error_prefix)\r\n 1681 if hasattr(error, 'output') and error.output: # type: ignore\r\n 1682 message += f\": {error.output.decode()}\" # type: ignore\r\n-> 1683 raise RuntimeError(message) from e\r\n 1684 \r\n 1685 \r\n\r\nRuntimeError: Error building extension 'fused_adam'\r\n```",
"Resolved here: https://github.com/microsoft/DeepSpeed/issues/885 - simply very low resources colab instance - I updated \r\nhttps://github.com/stas00/porting/blob/master/transformers/deepspeed/DeepSpeed_on_colab_CLI.ipynb to give more instructions/guidelines."
] | 1,616 | 1,616 | 1,616 | NONE | null | Hi,
I recently updated `transformers` to `4.4.2` for `DebertaV2`, and while training DebertaV2 with DeepSpeed I got an error about the `deepspeed` version. So I upgraded to the latest DeepSpeed (0.3.13), started training again, and got this error -
**RuntimeError: Error building extension 'fused_adam'**
Here is the env info -

`transformers - 4.4.2`
I also tried with torch==1.8.0+cu101 and got the same error.

I was able to train with DeepSpeed using `transformers-4.3.2` and `deepspeed-0.3.10`. Please suggest how to proceed further.
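For reference, a minimal sanity check (not specific to this setup) that often explains `fused_adam` JIT build failures is confirming that the CUDA version PyTorch was built with matches the system CUDA toolkit that ninja/nvcc will use when compiling DeepSpeed's fused ops; the snippet below is only that check, nothing from the failing run itself:

```python
import torch

# fused_adam is JIT-compiled with the system nvcc; the compile usually fails when
# torch.version.cuda and the installed CUDA toolkit (nvcc --version) don't match.
print("torch:", torch.__version__)
print("torch built with CUDA:", torch.version.cuda)
print("GPU visible:", torch.cuda.is_available())
```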
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10853/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10852 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10852/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10852/comments | https://api.github.com/repos/huggingface/transformers/issues/10852/events | https://github.com/huggingface/transformers/issues/10852 | 837,656,414 | MDU6SXNzdWU4Mzc2NTY0MTQ= | 10,852 | Longformer training : CUDA error: device-side assert triggered | {
"login": "manchandasahil",
"id": 32937046,
"node_id": "MDQ6VXNlcjMyOTM3MDQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/32937046?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manchandasahil",
"html_url": "https://github.com/manchandasahil",
"followers_url": "https://api.github.com/users/manchandasahil/followers",
"following_url": "https://api.github.com/users/manchandasahil/following{/other_user}",
"gists_url": "https://api.github.com/users/manchandasahil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manchandasahil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manchandasahil/subscriptions",
"organizations_url": "https://api.github.com/users/manchandasahil/orgs",
"repos_url": "https://api.github.com/users/manchandasahil/repos",
"events_url": "https://api.github.com/users/manchandasahil/events{/privacy}",
"received_events_url": "https://api.github.com/users/manchandasahil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Seems like my issue. Maybe can help: https://github.com/huggingface/transformers/issues/10832",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"How to fix it?I come up with this issue too.",
"Also\r\n"
] | 1,616 | 1,686 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version: 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: sharedddp (Fairscale)
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
Library:
- trainer: @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): Longformer
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ x ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ x ] my own task or dataset: (give details below)
## To reproduce
When I use the same configuration to train model type bert it works, but it does not work for longformer.
Steps to reproduce the behavior:
/opt/conda/bin/python -m torch.distributed.launch \
--nnodes=$WORLD_SIZE \
--node_rank=$RANK \
--master_addr=$MASTER_ADDR \
--master_port=$MASTER_PORT \
--nproc_per_node=1 $SCRIPT \
--output_dir=$OUT_DIR \
--logging_dir=$OUT_DIR \
--tokenizer_name=$TOKENIZER \
--model_type=longformer --do_train --do_eval \
--cache_dir=$CACHE_DIR \
--overwrite_cache \
--validation_file=$EVAL_DATA \
--overwrite_output_dir \
--train_file=$TRAIN_DATA_FOLDER \
--dataset_name=$DATASET_NAME \
--line_by_line \
--learning_rate=${INIT_LR} \
--save_steps=${SAVE_STEPS} \
--max_seq_length=${BLOCK_SIZE} \
--gradient_accumulation_steps=${GRAD_ACCUM_STEPS} \
--fp16 \
--num_train_epochs=$EPOCHS \
--per_device_train_batch_size=$BATCH_SIZE_PER_GPU \
--local_rank=$LOCAL_RANK \
--train_dataset_info_path=$TRAIN_DATASET_INFO \
--test_dataset_info_path=$TEST_DATASET_INFO \
--sharded_ddp \
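Before the full distributed run, a quick standalone check can rule out the most common cause of this kind of device-side assert, i.e. token or position indices falling outside the model's embedding sizes. This is only a debugging sketch — the tokenizer/config paths are placeholders for the ones passed above — and the traceback from the actual run follows below.

```python
from transformers import AutoConfig, AutoTokenizer

# Placeholders: substitute the --tokenizer_name / model config used in the command above
tokenizer = AutoTokenizer.from_pretrained("path/to/tokenizer")
config = AutoConfig.from_pretrained("path/to/longformer/config")

sample = "one line taken from the training file"
ids = tokenizer(sample, truncation=True, max_length=config.max_position_embeddings)["input_ids"]

# Embedding lookups fail on GPU with a device-side assert when an index is out of range,
# so both of these must hold for every example in the dataset.
assert max(ids) < config.vocab_size, "token id exceeds the model's vocab_size"
assert len(ids) <= config.max_position_embeddings, "sequence longer than max_position_embeddings"
```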
Traceback (most recent call last):
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module>
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
train_result = trainer.train(resume_from_checkpoint=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in train
tr_loss += self.training_step(model, inputs)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1443, in training_step
loss = self.compute_loss(model, inputs)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1477, in compute_loss
outputs = model(**inputs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 218, in forward
return self.module(*inputs, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1765, in forward
return_dict=return_dict,
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1669, in forward
return_dict=return_dict,
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1245, in forward
Traceback (most recent call last):
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module>
Traceback (most recent call last):
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module>
is_global_attn = is_index_global_attn.flatten().any().item()
RuntimeError: CUDA error: device-side assert triggered
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
train_result = trainer.train(resume_from_checkpoint=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in train
train_result = trainer.train(resume_from_checkpoint=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in train
tr_loss += self.training_step(model, inputs)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1443, in training_step
tr_loss += self.training_step(model, inputs)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1443, in training_step
loss = self.compute_loss(model, inputs)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1477, in compute_loss
loss = self.compute_loss(model, inputs)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1477, in compute_loss
outputs = model(**inputs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
outputs = model(**inputs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 218, in forward
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 218, in forward
return self.module(*inputs, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
return self.module(*inputs, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1765, in forward
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1765, in forward
Traceback (most recent call last):
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module>
return_dict=return_dict,
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
return_dict=return_dict,
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1669, in forward
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1669, in forward
Traceback (most recent call last):
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module>
return_dict=return_dict,
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
return_dict=return_dict,
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1245, in forward
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1245, in forward
is_global_attn = is_index_global_attn.flatten().any().item()
RuntimeError: CUDA error: device-side assert triggered
is_global_attn = is_index_global_attn.flatten().any().item()
RuntimeError: CUDA error: device-side assert triggered
train_result = trainer.train(resume_from_checkpoint=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in train
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
tr_loss += self.training_step(model, inputs)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1443, in training_step
loss = self.compute_loss(model, inputs)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1477, in compute_loss
train_result = trainer.train(resume_from_checkpoint=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in train
outputs = model(**inputs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
tr_loss += self.training_step(model, inputs)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1443, in training_step
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 218, in forward
return self.module(*inputs, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1765, in forward
loss = self.compute_loss(model, inputs)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1477, in compute_loss
return_dict=return_dict,
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
outputs = model(**inputs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1669, in forward
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 218, in forward
return self.module(*inputs, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1765, in forward
return_dict=return_dict,
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1245, in forward
return_dict=return_dict,
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
is_global_attn = is_index_global_attn.flatten().any().item()
RuntimeError: CUDA error: device-side assert triggered
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1669, in forward
return_dict=return_dict,
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1245, in forward
is_global_attn = is_index_global_attn.flatten().any().item()
RuntimeError: CUDA error: device-side assert triggered
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: device-side assert triggered
Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7fc78c43d99b in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xc10 (0x7fc78c680280 in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7fc78c425dfd in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #3: <unknown function> + 0x5414e2 (0x7fc7c549d4e2 in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #4: <unknown function> + 0x19aaae (0x5603f8975aae in /opt/conda/bin/python)
frame #5: <unknown function> + 0xf2868 (0x5603f88cd868 in /opt/conda/bin/python)
frame #6: <unknown function> + 0x1f0d91 (0x5603f89cbd91 in /opt/conda/bin/python)
frame #7: <unknown function> + 0xf270d (0x5603f88cd70d in /opt/conda/bin/python)
frame #8: <unknown function> + 0x19aa90 (0x5603f8975a90 in /opt/conda/bin/python)
frame #9: <unknown function> + 0xf2868 (0x5603f88cd868 in /opt/conda/bin/python)
frame #10: <unknown function> + 0x1f0d91 (0x5603f89cbd91 in /opt/conda/bin/python)
frame #11: <unknown function> + 0xf2828 (0x5603f88cd828 in /opt/conda/bin/python)
frame #12: <unknown function> + 0x19aa90 (0x5603f8975a90 in /opt/conda/bin/python)
frame #13: <unknown function> + 0xf2868 (0x5603f88cd868 in /opt/conda/bin/python)
frame #14: <unknown function> + 0x1f0d91 (0x5603f89cbd91 in /opt/conda/bin/python)
frame #15: <unknown function> + 0x1688cb (0x5603f89438cb in /opt/conda/bin/python)
frame #16: _PyGC_CollectNoFail + 0x2a (0x5603f89cb79a in /opt/conda/bin/python)
frame #17: PyImport_Cleanup + 0x278 (0x5603f897ffa8 in /opt/conda/bin/python)
frame #18: Py_FinalizeEx + 0x61 (0x5603f89ea961 in /opt/conda/bin/python)
frame #19: Py_Main + 0x35e (0x5603f89f4cae in /opt/conda/bin/python)
frame #20: main + 0xee (0x5603f88bef2e in /opt/conda/bin/python)
frame #21: __libc_start_main + 0xe7 (0x7fc7f2cf3b97 in /lib/x86_64-linux-gnu/libc.so.6)
frame #22: <unknown function> + 0x1c327f (0x5603f899e27f in /opt/conda/bin/python)
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: device-side assert triggered
Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7fa371cb999b in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xc10 (0x7fa371efc280 in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7fa371ca1dfd in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #3: <unknown function> + 0x5414e2 (0x7fa3aad194e2 in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #4: <unknown function> + 0x19aaae (0x5559699ffaae in /opt/conda/bin/python)
frame #5: <unknown function> + 0xf2868 (0x555969957868 in /opt/conda/bin/python)
frame #6: <unknown function> + 0x1f0d91 (0x555969a55d91 in /opt/conda/bin/python)
frame #7: <unknown function> + 0xf270d (0x55596995770d in /opt/conda/bin/python)
frame #8: <unknown function> + 0x19aa90 (0x5559699ffa90 in /opt/conda/bin/python)
frame #9: <unknown function> + 0xf2868 (0x555969957868 in /opt/conda/bin/python)
frame #10: <unknown function> + 0x1f0d91 (0x555969a55d91 in /opt/conda/bin/python)
frame #11: <unknown function> + 0xf2828 (0x555969957828 in /opt/conda/bin/python)
frame #12: <unknown function> + 0x19aa90 (0x5559699ffa90 in /opt/conda/bin/python)
frame #13: <unknown function> + 0xf2868 (0x555969957868 in /opt/conda/bin/python)
frame #14: <unknown function> + 0x1f0d91 (0x555969a55d91 in /opt/conda/bin/python)
frame #15: <unknown function> + 0x1688cb (0x5559699cd8cb in /opt/conda/bin/python)
frame #16: _PyGC_CollectNoFail + 0x2a (0x555969a5579a in /opt/conda/bin/python)
frame #17: PyImport_Cleanup + 0x278 (0x555969a09fa8 in /opt/conda/bin/python)
frame #18: Py_FinalizeEx + 0x61 (0x555969a74961 in /opt/conda/bin/python)
frame #19: Py_Main + 0x35e (0x555969a7ecae in /opt/conda/bin/python)
frame #20: main + 0xee (0x555969948f2e in /opt/conda/bin/python)
frame #21: __libc_start_main + 0xe7 (0x7fa3d856fb97 in /lib/x86_64-linux-gnu/libc.so.6)
frame #22: <unknown function> + 0x1c327f (0x555969a2827f in /opt/conda/bin/python)
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: device-side assert triggered
Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7f121fb5299b in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xc10 (0x7f121fd95280 in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7f121fb3adfd in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #3: <unknown function> + 0x5414e2 (0x7f1258bb24e2 in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #4: <unknown function> + 0x19aaae (0x5601c5024aae in /opt/conda/bin/python)
frame #5: <unknown function> + 0xf2868 (0x5601c4f7c868 in /opt/conda/bin/python)
frame #6: <unknown function> + 0x1f0d91 (0x5601c507ad91 in /opt/conda/bin/python)
frame #7: <unknown function> + 0xf270d (0x5601c4f7c70d in /opt/conda/bin/python)
frame #8: <unknown function> + 0x19aa90 (0x5601c5024a90 in /opt/conda/bin/python)
frame #9: <unknown function> + 0xf2868 (0x5601c4f7c868 in /opt/conda/bin/python)
frame #10: <unknown function> + 0x1f0d91 (0x5601c507ad91 in /opt/conda/bin/python)
frame #11: <unknown function> + 0xf2828 (0x5601c4f7c828 in /opt/conda/bin/python)
frame #12: <unknown function> + 0x19aa90 (0x5601c5024a90 in /opt/conda/bin/python)
frame #13: <unknown function> + 0xf2868 (0x5601c4f7c868 in /opt/conda/bin/python)
frame #14: <unknown function> + 0x1f0d91 (0x5601c507ad91 in /opt/conda/bin/python)
frame #15: <unknown function> + 0x1688cb (0x5601c4ff28cb in /opt/conda/bin/python)
frame #16: _PyGC_CollectNoFail + 0x2a (0x5601c507a79a in /opt/conda/bin/python)
frame #17: PyImport_Cleanup + 0x278 (0x5601c502efa8 in /opt/conda/bin/python)
frame #18: Py_FinalizeEx + 0x61 (0x5601c5099961 in /opt/conda/bin/python)
frame #19: Py_Main + 0x35e (0x5601c50a3cae in /opt/conda/bin/python)
frame #20: main + 0xee (0x5601c4f6df2e in /opt/conda/bin/python)
frame #21: __libc_start_main + 0xe7 (0x7f1286408b97 in /lib/x86_64-linux-gnu/libc.so.6)
frame #22: <unknown function> + 0x1c327f (0x5601c504d27f in /opt/conda/bin/python)
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: device-side assert triggered
Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7fe94f54799b in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xc10 (0x7fe94f78a280 in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7fe94f52fdfd in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #3: <unknown function> + 0x5414e2 (0x7fe9885a74e2 in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #4: <unknown function> + 0x19aaae (0x55ab4542baae in /opt/conda/bin/python)
frame #5: <unknown function> + 0xf2868 (0x55ab45383868 in /opt/conda/bin/python)
frame #6: <unknown function> + 0x1f0d91 (0x55ab45481d91 in /opt/conda/bin/python)
frame #7: <unknown function> + 0xf270d (0x55ab4538370d in /opt/conda/bin/python)
frame #8: <unknown function> + 0x19aa90 (0x55ab4542ba90 in /opt/conda/bin/python)
frame #9: <unknown function> + 0xf2868 (0x55ab45383868 in /opt/conda/bin/python)
frame #10: <unknown function> + 0x1f0d91 (0x55ab45481d91 in /opt/conda/bin/python)
frame #11: <unknown function> + 0xf2828 (0x55ab45383828 in /opt/conda/bin/python)
frame #12: <unknown function> + 0x19aa90 (0x55ab4542ba90 in /opt/conda/bin/python)
frame #13: <unknown function> + 0xf2868 (0x55ab45383868 in /opt/conda/bin/python)
frame #14: <unknown function> + 0x1f0d91 (0x55ab45481d91 in /opt/conda/bin/python)
frame #15: <unknown function> + 0x1688cb (0x55ab453f98cb in /opt/conda/bin/python)
frame #16: _PyGC_CollectNoFail + 0x2a (0x55ab4548179a in /opt/conda/bin/python)
frame #17: PyImport_Cleanup + 0x278 (0x55ab45435fa8 in /opt/conda/bin/python)
frame #18: Py_FinalizeEx + 0x61 (0x55ab454a0961 in /opt/conda/bin/python)
frame #19: Py_Main + 0x35e (0x55ab454aacae in /opt/conda/bin/python)
frame #20: main + 0xee (0x55ab45374f2e in /opt/conda/bin/python)
frame #21: __libc_start_main + 0xe7 (0x7fe9b5dfdb97 in /lib/x86_64-linux-gnu/libc.so.6)
frame #22: <unknown function> + 0x1c327f (0x55ab4545427f in /opt/conda/bin/python)
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: device-side assert triggered
Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7fce50e8399b in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xc10 (0x7fce510c6280 in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7fce50e6bdfd in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #3: <unknown function> + 0x5414e2 (0x7fce89ee34e2 in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #4: <unknown function> + 0x19aaae (0x55919a5ffaae in /opt/conda/bin/python)
frame #5: <unknown function> + 0xf2868 (0x55919a557868 in /opt/conda/bin/python)
frame #6: <unknown function> + 0x1f0d91 (0x55919a655d91 in /opt/conda/bin/python)
frame #7: <unknown function> + 0xf270d (0x55919a55770d in /opt/conda/bin/python)
frame #8: <unknown function> + 0x19aa90 (0x55919a5ffa90 in /opt/conda/bin/python)
frame #9: <unknown function> + 0xf2868 (0x55919a557868 in /opt/conda/bin/python)
frame #10: <unknown function> + 0x1f0d91 (0x55919a655d91 in /opt/conda/bin/python)
frame #11: <unknown function> + 0xf2828 (0x55919a557828 in /opt/conda/bin/python)
frame #12: <unknown function> + 0x19aa90 (0x55919a5ffa90 in /opt/conda/bin/python)
frame #13: <unknown function> + 0xf2868 (0x55919a557868 in /opt/conda/bin/python)
frame #14: <unknown function> + 0x1f0d91 (0x55919a655d91 in /opt/conda/bin/python)
frame #15: <unknown function> + 0x1688cb (0x55919a5cd8cb in /opt/conda/bin/python)
frame #16: _PyGC_CollectNoFail + 0x2a (0x55919a65579a in /opt/conda/bin/python)
frame #17: PyImport_Cleanup + 0x278 (0x55919a609fa8 in /opt/conda/bin/python)
frame #18: Py_FinalizeEx + 0x61 (0x55919a674961 in /opt/conda/bin/python)
frame #19: Py_Main + 0x35e (0x55919a67ecae in /opt/conda/bin/python)
frame #20: main + 0xee (0x55919a548f2e in /opt/conda/bin/python)
frame #21: __libc_start_main + 0xe7 (0x7fceb7739b97 in /lib/x86_64-linux-gnu/libc.so.6)
frame #22: <unknown function> + 0x1c327f (0x55919a62827f in /opt/conda/bin/python)
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: device-side assert triggered
Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7f01ad8c799b in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xc10 (0x7f01adb0a280 in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7f01ad8afdfd in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #3: <unknown function> + 0x5414e2 (0x7f01e69274e2 in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #4: <unknown function> + 0x19aaae (0x55c9bc565aae in /opt/conda/bin/python)
frame #5: <unknown function> + 0xf2868 (0x55c9bc4bd868 in /opt/conda/bin/python)
frame #6: <unknown function> + 0x1f0d91 (0x55c9bc5bbd91 in /opt/conda/bin/python)
frame #7: <unknown function> + 0xf270d (0x55c9bc4bd70d in /opt/conda/bin/python)
frame #8: <unknown function> + 0x19aa90 (0x55c9bc565a90 in /opt/conda/bin/python)
frame #9: <unknown function> + 0xf2868 (0x55c9bc4bd868 in /opt/conda/bin/python)
frame #10: <unknown function> + 0x1f0d91 (0x55c9bc5bbd91 in /opt/conda/bin/python)
frame #11: <unknown function> + 0xf2828 (0x55c9bc4bd828 in /opt/conda/bin/python)
frame #12: <unknown function> + 0x19aa90 (0x55c9bc565a90 in /opt/conda/bin/python)
frame #13: <unknown function> + 0xf2868 (0x55c9bc4bd868 in /opt/conda/bin/python)
frame #14: <unknown function> + 0x1f0d91 (0x55c9bc5bbd91 in /opt/conda/bin/python)
frame #15: <unknown function> + 0x1688cb (0x55c9bc5338cb in /opt/conda/bin/python)
frame #16: _PyGC_CollectNoFail + 0x2a (0x55c9bc5bb79a in /opt/conda/bin/python)
frame #17: PyImport_Cleanup + 0x278 (0x55c9bc56ffa8 in /opt/conda/bin/python)
frame #18: Py_FinalizeEx + 0x61 (0x55c9bc5da961 in /opt/conda/bin/python)
frame #19: Py_Main + 0x35e (0x55c9bc5e4cae in /opt/conda/bin/python)
frame #20: main + 0xee (0x55c9bc4aef2e in /opt/conda/bin/python)
frame #21: __libc_start_main + 0xe7 (0x7f021417db97 in /lib/x86_64-linux-gnu/libc.so.6)
frame #22: <unknown function> + 0x1c327f (0x55c9bc58e27f in /opt/conda/bin/python)
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: device-side assert triggered
Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7ff569f1599b in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xc10 (0x7ff56a158280 in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7ff569efddfd in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #3: <unknown function> + 0x5414e2 (0x7ff5a2f754e2 in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #4: <unknown function> + 0x19aaae (0x562bbdb46aae in /opt/conda/bin/python)
frame #5: <unknown function> + 0xf2868 (0x562bbda9e868 in /opt/conda/bin/python)
frame #6: <unknown function> + 0x1f0d91 (0x562bbdb9cd91 in /opt/conda/bin/python)
frame #7: <unknown function> + 0xf270d (0x562bbda9e70d in /opt/conda/bin/python)
frame #8: <unknown function> + 0x19aa90 (0x562bbdb46a90 in /opt/conda/bin/python)
frame #9: <unknown function> + 0xf2868 (0x562bbda9e868 in /opt/conda/bin/python)
frame #10: <unknown function> + 0x1f0d91 (0x562bbdb9cd91 in /opt/conda/bin/python)
frame #11: <unknown function> + 0xf2828 (0x562bbda9e828 in /opt/conda/bin/python)
frame #12: <unknown function> + 0x19aa90 (0x562bbdb46a90 in /opt/conda/bin/python)
frame #13: <unknown function> + 0xf2868 (0x562bbda9e868 in /opt/conda/bin/python)
frame #14: <unknown function> + 0x1f0d91 (0x562bbdb9cd91 in /opt/conda/bin/python)
frame #15: <unknown function> + 0x1688cb (0x562bbdb148cb in /opt/conda/bin/python)
frame #16: _PyGC_CollectNoFail + 0x2a (0x562bbdb9c79a in /opt/conda/bin/python)
frame #17: PyImport_Cleanup + 0x278 (0x562bbdb50fa8 in /opt/conda/bin/python)
frame #18: Py_FinalizeEx + 0x61 (0x562bbdbbb961 in /opt/conda/bin/python)
frame #19: Py_Main + 0x35e (0x562bbdbc5cae in /opt/conda/bin/python)
frame #20: main + 0xee (0x562bbda8ff2e in /opt/conda/bin/python)
frame #21: __libc_start_main + 0xe7 (0x7ff5d07cbb97 in /lib/x86_64-linux-gnu/libc.so.6)
frame #22: <unknown function> + 0x1c327f (0x562bbdb6f27f in /opt/conda/bin/python)
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: device-side assert triggered
Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7f9808d0299b in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xc10 (0x7f9808f45280 in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7f9808ceadfd in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #3: <unknown function> + 0x5414e2 (0x7f9841d624e2 in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #4: <unknown function> + 0x19aaae (0x55ba33d58aae in /opt/conda/bin/python)
frame #5: <unknown function> + 0xf2868 (0x55ba33cb0868 in /opt/conda/bin/python)
frame #6: <unknown function> + 0x1f0d91 (0x55ba33daed91 in /opt/conda/bin/python)
frame #7: <unknown function> + 0xf270d (0x55ba33cb070d in /opt/conda/bin/python)
frame #8: <unknown function> + 0x19aa90 (0x55ba33d58a90 in /opt/conda/bin/python)
frame #9: <unknown function> + 0xf2868 (0x55ba33cb0868 in /opt/conda/bin/python)
frame #10: <unknown function> + 0x1f0d91 (0x55ba33daed91 in /opt/conda/bin/python)
frame #11: <unknown function> + 0xf2828 (0x55ba33cb0828 in /opt/conda/bin/python)
frame #12: <unknown function> + 0x19aa90 (0x55ba33d58a90 in /opt/conda/bin/python)
frame #13: <unknown function> + 0xf2868 (0x55ba33cb0868 in /opt/conda/bin/python)
frame #14: <unknown function> + 0x1f0d91 (0x55ba33daed91 in /opt/conda/bin/python)
frame #15: <unknown function> + 0x1688cb (0x55ba33d268cb in /opt/conda/bin/python)
frame #16: _PyGC_CollectNoFail + 0x2a (0x55ba33dae79a in /opt/conda/bin/python)
frame #17: PyImport_Cleanup + 0x278 (0x55ba33d62fa8 in /opt/conda/bin/python)
frame #18: Py_FinalizeEx + 0x61 (0x55ba33dcd961 in /opt/conda/bin/python)
frame #19: Py_Main + 0x35e (0x55ba33dd7cae in /opt/conda/bin/python)
frame #20: main + 0xee (0x55ba33ca1f2e in /opt/conda/bin/python)
frame #21: __libc_start_main + 0xe7 (0x7f986f5b8b97 in /lib/x86_64-linux-gnu/libc.so.6)
frame #22: <unknown function> + 0x1c327f (0x55ba33d8127f in /opt/conda/bin/python)
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10852/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10851 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10851/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10851/comments | https://api.github.com/repos/huggingface/transformers/issues/10851/events | https://github.com/huggingface/transformers/issues/10851 | 837,633,317 | MDU6SXNzdWU4Mzc2MzMzMTc= | 10,851 | Small inconsistency in tokenization_utils for special tokens retrieval | {
"login": "aitor-garcia-p",
"id": 2588285,
"node_id": "MDQ6VXNlcjI1ODgyODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2588285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aitor-garcia-p",
"html_url": "https://github.com/aitor-garcia-p",
"followers_url": "https://api.github.com/users/aitor-garcia-p/followers",
"following_url": "https://api.github.com/users/aitor-garcia-p/following{/other_user}",
"gists_url": "https://api.github.com/users/aitor-garcia-p/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aitor-garcia-p/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aitor-garcia-p/subscriptions",
"organizations_url": "https://api.github.com/users/aitor-garcia-p/orgs",
"repos_url": "https://api.github.com/users/aitor-garcia-p/repos",
"events_url": "https://api.github.com/users/aitor-garcia-p/events{/privacy}",
"received_events_url": "https://api.github.com/users/aitor-garcia-p/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | NONE | null | Hi there,
This is just a minor issue I have spotted.
When a special token (cls_token, sep_token, etc.) is accessed in an instantiated Tokenizer, this is the piece of code executed to retrieve the property:
```python
@property
def eos_token(self) -> str:
"""
:obj:`str`: End of sentence token. Log an error if used while not having been set.
"""
if self._eos_token is None and self.verbose:
logger.error("Using eos_token, but it is not set yet.")
return None
return str(self._eos_token)
```
The None check is tied to the verbose flag, so when verbose is set to False the condition is never triggered, and even if the special token is None the literal string 'None' is returned (the last line).
The same happens for all the special tokens, leading to unexpected behavior if you are expecting an actual None outside the tokenizer. I think the "verbose" flag and the "is None" check should be handled in separate conditionals.
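A minimal sketch of what I mean (untested, just to illustrate separating the two checks so the flag only controls logging):

```python
@property
def eos_token(self) -> str:
    """
    :obj:`str`: End of sentence token. Returns None (not the string "None") if it has not been set.
    """
    if self._eos_token is None:
        if self.verbose:
            logger.error("Using eos_token, but it is not set yet.")
        return None
    return str(self._eos_token)
```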
The mentioned code can be located at:
https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L949
Thank you very much.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10851/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10850 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10850/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10850/comments | https://api.github.com/repos/huggingface/transformers/issues/10850/events | https://github.com/huggingface/transformers/issues/10850 | 837,548,277 | MDU6SXNzdWU4Mzc1NDgyNzc= | 10,850 | How to train encoder decoder for explicit negation generation | {
"login": "thak123",
"id": 3891859,
"node_id": "MDQ6VXNlcjM4OTE4NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3891859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thak123",
"html_url": "https://github.com/thak123",
"followers_url": "https://api.github.com/users/thak123/followers",
"following_url": "https://api.github.com/users/thak123/following{/other_user}",
"gists_url": "https://api.github.com/users/thak123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thak123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thak123/subscriptions",
"organizations_url": "https://api.github.com/users/thak123/orgs",
"repos_url": "https://api.github.com/users/thak123/repos",
"events_url": "https://api.github.com/users/thak123/events{/privacy}",
"received_events_url": "https://api.github.com/users/thak123/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,616 | 1,617 | 1,617 | NONE | null | Hi
I am trying to generate negations out of non-negated sentences.
I used a simple “I have tea” => “I don’t have tea” formatted dataset to train an XLM-R encoder-decoder model, following the example provided in the colab.
```
# set special tokens
roberta_shared.config.decoder_start_token_id = tokenizer.bos_token_id
roberta_shared.config.eos_token_id = tokenizer.eos_token_id
# sensible parameters for beam search
# set decoding params
roberta_shared.config.max_length = 64
roberta_shared.config.early_stopping = True
roberta_shared.config.no_repeat_ngram_size = 3
roberta_shared.config.length_penalty = 2.0
roberta_shared.config.num_beams = 4
roberta_shared.config.vocab_size = roberta_shared.config.encoder.vocab_size
```
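For context, this is roughly how the shared model above was created — a sketch only, with `xlm-roberta-base` standing in for whichever checkpoint the colab actually uses:

```python
from transformers import EncoderDecoderModel, XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
# Encoder and decoder share (tied) weights, as in the shared RoBERTa encoder-decoder example
roberta_shared = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "xlm-roberta-base", "xlm-roberta-base", tie_encoder_decoder=True
)
```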
But on the test set the model produces different tokens than the source. How can I preserve the source tokens when generating the output?
[“I have it.”, “I love tea”, “I can have coffee.”] =>
[‘I have no it.’, “I’ll not love.”, “I can’t have food.”]
i.e. the model replaces words from the source sentence instead of only inserting the negation.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10850/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10849 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10849/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10849/comments | https://api.github.com/repos/huggingface/transformers/issues/10849/events | https://github.com/huggingface/transformers/pull/10849 | 837,512,926 | MDExOlB1bGxSZXF1ZXN0NTk3ODM3MzMx | 10,849 | Fix: typo in FINE_TUNE_XLSR_WAV2VEC2.md | {
"login": "qqpann",
"id": 17402261,
"node_id": "MDQ6VXNlcjE3NDAyMjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/17402261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qqpann",
"html_url": "https://github.com/qqpann",
"followers_url": "https://api.github.com/users/qqpann/followers",
"following_url": "https://api.github.com/users/qqpann/following{/other_user}",
"gists_url": "https://api.github.com/users/qqpann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qqpann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qqpann/subscriptions",
"organizations_url": "https://api.github.com/users/qqpann/orgs",
"repos_url": "https://api.github.com/users/qqpann/repos",
"events_url": "https://api.github.com/users/qqpann/events{/privacy}",
"received_events_url": "https://api.github.com/users/qqpann/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | Fix typo.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
This is a simple typo fix. Could you review it, @sgugger?
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10849/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10849",
"html_url": "https://github.com/huggingface/transformers/pull/10849",
"diff_url": "https://github.com/huggingface/transformers/pull/10849.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10849.patch",
"merged_at": 1616414339000
} |
https://api.github.com/repos/huggingface/transformers/issues/10848 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10848/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10848/comments | https://api.github.com/repos/huggingface/transformers/issues/10848/events | https://github.com/huggingface/transformers/pull/10848 | 837,467,742 | MDExOlB1bGxSZXF1ZXN0NTk3Nzk5MTM4 | 10,848 | GPT Neo | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sdtblck @leogao2 this is the Neo PR, reviews/comments appreciated !",
"I tried running this with the 2.7B checkpoint and got \r\n```\r\n(base) stellabiderman@Stellas-MacBook-Pro research % python transformers/src/transformers/models/gpt_neo/convert_gpt_neo_mesh_tf_to_pytorch.py --tf_checkpoint_path GPT3_2.7B/checkpoint --config_file GPT3_2-7B/config.json --pytorch_dump_path GPT3_2-7B\r\nBuilding PyTorch model from configuration: GPTNeoConfig {\r\n \"activation_function\": \"gelu\",\r\n \"ada_epsilon1\": \"1e-30\",\r\n \"ada_epsilon2\": 0.001,\r\n \"attention_types\": [\r\n [\r\n [\r\n \"global\",\r\n \"local\"\r\n ],\r\n 16\r\n ]\r\n ],\r\n \"attn_dropout\": 0,\r\n \"attn_layers\": [\r\n \"global\",\r\n \"local\",\r\n \"global\",\r\n \"local\",\r\n \"global\",\r\n \"local\",\r\n \"global\",\r\n \"local\",\r\n \"global\",\r\n \"local\",\r\n \"global\",\r\n \"local\",\r\n \"global\",\r\n \"local\",\r\n \"global\",\r\n \"local\",\r\n \"global\",\r\n \"local\",\r\n \"global\",\r\n \"local\",\r\n \"global\",\r\n \"local\",\r\n \"global\",\r\n \"local\"\r\n ],\r\n \"attn_pdrop\": 0.1,\r\n \"beta1\": 0.9,\r\n \"beta2\": 0.95,\r\n \"bos_token_id\": 50256,\r\n \"datasets\": [\r\n [\r\n \"pile\",\r\n null,\r\n null,\r\n null\r\n ]\r\n ],\r\n \"embd_pdrop\": 0.1,\r\n \"embed_dropout\": 0,\r\n \"eos_id\": 50256,\r\n \"eos_token_id\": 50256,\r\n \"epsilon\": 1e-08,\r\n \"eval_batch_size\": 128,\r\n \"eval_steps\": 10,\r\n \"gradient_checkpointing\": false,\r\n \"gradient_clipping\": 1.0,\r\n \"initializer_range\": 0.02,\r\n \"iterations\": 500,\r\n \"layer_norm_epsilon\": 1e-05,\r\n \"layout\": \"batch:x,embd:y\",\r\n \"lr\": 0.00016,\r\n \"lr_decay\": \"cosine\",\r\n \"lr_decay_end\": 300000,\r\n \"mesh_shape\": \"x:64,y:4\",\r\n \"model_path\": \"gs://neo-d/models/GPT3_2-7B\",\r\n \"model_type\": \"gpt_neo\",\r\n \"n_ctx\": 2048,\r\n \"n_embd\": 2560,\r\n \"n_head\": 20,\r\n \"n_inner\": null,\r\n \"n_layer\": 32,\r\n \"n_positions\": 2048,\r\n \"n_vocab\": 50257,\r\n \"opt_name\": \"adam\",\r\n \"padding_id\": 50257,\r\n \"predict_batch_size\": 1,\r\n \"predict_steps\": 0,\r\n \"recompute_grad\": true,\r\n \"res_dropout\": 0,\r\n \"resid_pdrop\": 0.1,\r\n \"scale_by_depth\": true,\r\n \"scale_by_in\": false,\r\n \"summary_activation\": null,\r\n \"summary_first_dropout\": 0.1,\r\n \"summary_proj_to_labels\": true,\r\n \"summary_type\": \"cls_index\",\r\n \"summary_use_proj\": true,\r\n \"tokens_per_mb_per_replica\": 4096,\r\n \"train_batch_size\": 512,\r\n \"train_steps\": 400000,\r\n \"transformers_version\": \"4.5.0.dev0\",\r\n \"use_cache\": false,\r\n \"vocab_size\": 50257,\r\n \"warmup_steps\": 3000,\r\n \"weight_decay\": 0,\r\n \"window_size\": 256\r\n}\r\n\r\nTraceback (most recent call last):\r\n File \"transformers/src/transformers/models/gpt_neo/convert_gpt_neo_mesh_tf_to_pytorch.py\", line 59, in <module>\r\n convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path, args.config_file, args.pytorch_dump_path)\r\n File \"transformers/src/transformers/models/gpt_neo/convert_gpt_neo_mesh_tf_to_pytorch.py\", line 31, in convert_tf_checkpoint_to_pytorch\r\n model = GPTNeoForCausalLM(config)\r\n File \"/Users/stellabiderman/Documents/Research/transformers/src/transformers/models/gpt_neo/modeling_gpt_neo.py\", line 778, in __init__\r\n self.transformer = GPTNeoModel(config)\r\n File \"/Users/stellabiderman/Documents/Research/transformers/src/transformers/models/gpt_neo/modeling_gpt_neo.py\", line 597, in __init__\r\n self.h = nn.ModuleList([Block(config, layer_id=i) for i in range(config.n_layer)])\r\n File 
\"/Users/stellabiderman/Documents/Research/transformers/src/transformers/models/gpt_neo/modeling_gpt_neo.py\", line 597, in <listcomp>\r\n self.h = nn.ModuleList([Block(config, layer_id=i) for i in range(config.n_layer)])\r\n File \"/Users/stellabiderman/Documents/Research/transformers/src/transformers/models/gpt_neo/modeling_gpt_neo.py\", line 434, in __init__\r\n self.attn = GPTNeoAttention(config, layer_id)\r\n File \"/Users/stellabiderman/Documents/Research/transformers/src/transformers/models/gpt_neo/modeling_gpt_neo.py\", line 381, in __init__\r\n self.attention_type = self.attn_layers[layer_id]\r\nIndexError: list index out of range\r\n```",
"Hi @StellaAthena ,\r\n2.7B models has 32 layers, so `attn_layers` should be \r\n\r\n\r\n```python\r\n['global',\r\n 'local',\r\n 'global',\r\n 'local',\r\n 'global',\r\n 'local',\r\n 'global',\r\n 'local',\r\n 'global',\r\n 'local',\r\n 'global',\r\n 'local',\r\n 'global',\r\n 'local',\r\n 'global',\r\n 'local',\r\n 'global',\r\n 'local',\r\n 'global',\r\n 'local',\r\n 'global',\r\n 'local',\r\n 'global',\r\n 'local',\r\n 'global',\r\n 'local',\r\n 'global',\r\n 'local',\r\n 'global',\r\n 'local',\r\n 'global',\r\n 'local']\r\n```\r\n\r\nI've converted these checkpoints and will push them to the hub in a couple of hours. I'll ping you once that's done, so you can directly download them.",
"I see! Is this a problem with my local config file, or is something up with the code on the repo? I downloaded my file directly from the-eye before running the conversion script, so if the local config file is wrong that’s a bit of a problem for us.",
"Hey @patil-suraj haven't had a chance to look over the whole PR yet, so i'm not sure how you load up the configuration, but I wonder why you even have separate fields for \"attention_types\" and \"attention_layers\" since they configure the same thing, and attention layers can be derived from attention types",
"Hi @sdtblck \r\n\r\n`attention_types` is not used by the config, it only uses `attention_layers`, but yeah `attention_layers` can be derived from \r\n`attention_types`.\r\n\r\nFor an example config file, see https://huggingface.co/valhalla/gpt_neo_xl_test/blob/main/config.json\r\n\r\nI've uploaded the 1.3B checkpoint under my namespace temporarily, here's a [colab](https://colab.research.google.com/drive/1EE2oMOXj2lAxPDS5KB3t7R5lWKTln0pk?usp=sharing) if you wanna give it a try.",
"> Hi @sdtblck\r\n> \r\n> `attention_types` is not used by the config, it only uses `attention_layers`, but yeah `attention_layers` can be derived from\r\n> `attention_types`.\r\n\r\nOur config file doesn't define `attention _layers`. It appears that you [hard-coded](https://github.com/patil-suraj/transformers/blob/b35d805b81516a1c44f32f98205709c3b95a6be8/src/transformers/models/gpt_neo/configuration_gpt_neo.py#L102) this specific attention pattern. I agree with @sdtblck that it would make much more sense to derive `attention_layers` from `attention_types`. I believe the correct place to do that would be [here](https://github.com/patil-suraj/transformers/blob/b35d805b81516a1c44f32f98205709c3b95a6be8/src/transformers/models/gpt_neo/convert_gpt_neo_mesh_tf_to_pytorch.py#L29).",
"Yes, you are right! I hardcoded it since we usually prefer to keep everything explicit but yeah I agree this would be a problem for your side. I will change it so that `attention_layers` will be derived from `attention_types`.\r\n\r\nAre there any other issues?",
"@StellaAthena @sdtblck \r\n\r\nThe 2.7B model is up! https://huggingface.co/valhalla/gpt_neo_2.7B/tree/main",
"I tried out the 2.7B model you posted @patil-suraj but it wouldn't run. I get the error\r\n\r\n```\r\nSome weights of the model checkpoint at valhalla/gpt_neo_2.7B were not used when initializing GPT2LMHeadModel:\r\n...\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nTraceback (most recent call last):\r\n File \"main.py\", line 9, in <module>\r\n from lm_eval import models, tasks, evaluator, base\r\n File \"/home/mchorse/lm-evaluation-harness/lm_eval/models/__init__.py\", line 7, in <module>\r\n \"gpt-neo\": gpt2.GPT2LM(device=\"cuda\",pretrained=\"valhalla/gpt_neo_2.7B\"),\r\n File \"/home/mchorse/lm-evaluation-harness/lm_eval/models/gpt2.py\", line 14, in __init__\r\n self.gpt2 = transformers.GPT2LMHeadModel.from_pretrained(pretrained).to(self.device)\r\n File \"/home/mchorse/.local/lib/python3.8/site-packages/transformers/modeling_utils.py\", line 1181, in from_pretrained\r\n raise RuntimeError(\r\nRuntimeError: Error(s) in loading state_dict for GPT2LMHeadModel:\r\n...\r\n```\r\n\r\nLooking through the readout, I see\r\n```\r\nsize mismatch for transformer.h.0.mlp.c_fc.weight: copying a param with shape torch.Size([10240, 2560]) from checkpoint, the shape in current model is torch.Size([2560, 10240]).\r\nsize mismatch for transformer.h.0.mlp.c_proj.weight: copying a param with shape torch.Size([2560, 10240]) from checkpoint, the shape in current model is torch.Size([10240, 2560]).\r\nsize mismatch for transformer.h.1.mlp.c_fc.weight: copying a param with shape torch.Size([10240, 2560]) from checkpoint, the shape in current model is torch.Size([2560, 10240]).\r\nsize mismatch for transformer.h.1.mlp.c_proj.weight: copying a param with shape torch.Size([2560, 10240]) from checkpoint, the shape in current model is torch.Size([10240, 2560]).\r\nsize mismatch for transformer.h.2.mlp.c_fc.weight: copying a param with shape torch.Size([10240, 2560]) from checkpoint, the shape in current model is torch.Size([2560, 10240]).\r\nsize mismatch for transformer.h.2.mlp.c_proj.weight: copying a param with shape torch.Size([2560, 10240]) from checkpoint, the shape in current model is torch.Size([10240, 2560]).\r\n```\r\n\r\nI think that there's an unneeded transpose hanging out in the code.",
"It looks like you are using the `GPT2LMHeadModel` class. We've added a new class `GPTNeoForCasualLM` for `gpt-neo` , which should be used instead of `GPT2LMHeadModel`.\r\n\r\nCould you checkout this PR and try loading it using the `GPTNeoForCasualLM` class ?\r\n\r\nAnd yes, `GPT2` uses this [`Conv1D`](https://github.com/huggingface/transformers/blob/master/src/transformers/models/gpt2/modeling_gpt2.py#L255) layer which has transposed weights, hence the error.",
"> Was there no way to add some \"# Copied from\" statements to ensure that the two models do not diverge?\r\n\r\nI have made some changes to the code mostly related to naming and passing `config` to `Block` and `Attention` instead of individual arguments, so can't really use `# Copied from`",
"An update from our end: We got the 2.7B model up and running in our evaluation harness! Unfortunately the run revealed that the harness is bugged...\r\n\r\nRunning it by hand gives reasonable-looking results, but I don't know how much I should trust myself to judge that.",
"(to clarify: the bugs in eval harness were introduced by a series of pretty aggressive optimizations i implemented just a few hours earlier today)",
"I tried finetuning the model with deepspeed and gradient checkpointing, but unlike with GPT2, the loss explodes. I used the default run_clm.py from the examples folder, but added one line to activate gradient checkpointing. Here is then the command i ran:\r\n\r\n```\r\ndeepspeed --num_gpus=1 run_clm.py \\\r\n--deepspeed ds_config_gptneo.json \\\r\n--model_name_or_path valhalla/gpt_neo_2.7B \\\r\n--train_file train.csv \\\r\n--validation_file validation.csv \\\r\n--do_train \\\r\n--do_eval \\\r\n--fp16 \\\r\n--overwrite_cache \\\r\n--evaluation_strategy=\"steps\" \\\r\n--output_dir finetuned \\\r\n--num_train_epochs 2 \\\r\n--eval_steps 15 \\\r\n--gradient_accumulation_steps 2 \\\r\n--per_device_train_batch_size 4 \\\r\n--use_fast_tokenizer False \\\r\n--learning_rate 1e-05 \\\r\n--adam_beta1 0.9 \\\r\n--adam_beta2 0.95 \\\r\n--weight_decay 0.1 \\\r\n--warmup_steps 50\r\n```\r\n\r\nHere is my ds_config_gptneo.json (is almost the default, except for a lower min_loss_scaling, otherwise i got overflows) (optimizer and warmup hps are overwritten by the flags above):\r\n```\r\n\r\n{\r\n \"fp16\": {\r\n \"enabled\": true,\r\n \"loss_scale\": 0,\r\n \"loss_scale_window\": 1000,\r\n \"initial_scale_power\": -3,\r\n \"hysteresis\": 2,\r\n \"min_loss_scale\": -1000\r\n },\r\n \"zero_optimization\": {\r\n \"stage\": 2,\r\n \"allgather_partitions\": true,\r\n \"allgather_bucket_size\": 5e7,\r\n \"overlap_comm\": true,\r\n \"reduce_scatter\": true,\r\n \"reduce_bucket_size\": 5e7,\r\n \"contiguous_gradients\": true,\r\n \"cpu_offload\": true\r\n },\r\n \"optimizer\": {\r\n \"type\": \"AdamW\",\r\n \"params\": {\r\n \"lr\": 0.00001,\r\n \"betas\": [\r\n 0.9,\r\n 0.95\r\n ],\r\n \"eps\": 1e-8,\r\n \"weight_decay\": 0.1\r\n }\r\n },\r\n \"scheduler\": {\r\n \"type\": \"WarmupLR\",\r\n \"params\": {\r\n \"warmup_min_lr\": 0,\r\n \"warmup_max_lr\": 0.00001,\r\n \"warmup_num_steps\": 50\r\n }\r\n },\r\n \"steps_per_print\": 1000,\r\n \"wall_clock_breakdown\": false\r\n}\r\n```\r\n\r\nI tried the exact hyperparameters as well that EleutherAi used, with long warmup phases, but it is still the same. If the learning rate is low enough the loss doesn't change and once its big enough, it immediately explodes. I also did an hyperparameter sweep with the same result. Could this be an issue with the model implementation, as finetuning with EleutherAi's implementation in Mesh Tensorflow on Colab seems to work?\r\n\r\nHere are the exact steps that i did (on the bottom half part): https://github.com/Xirider/finetune-gpt2xl",
"hi @Xirider let me take a look, but meanwhile could you try without `fp16` ?",
"Hi, yes, i will try it",
"Hm, setting no fp16 doesn't work with Zero:\r\nAssertionError: DeepSpeedConfig: ZeRO is only supported if fp16 is enabled.\r\nAnd without deepspeed's zero i don't think i have enough gpu memory.",
"> It looks like you are using the `GPT2LMHeadModel` class. We've added a new class `GPTNeoForCasualLM` for `gpt-neo` , which should be used instead of `GPT2LMHeadModel`.\r\n> \r\n> Could you checkout this PR and try loading it using the `GPTNeoForCasualLM` class ?\r\n> \r\n> And yes, `GPT2` uses this [`Conv1D`](https://github.com/huggingface/transformers/blob/master/src/transformers/models/gpt2/modeling_gpt2.py#L255) layer which has transposed weights, hence the error.\r\n\r\nI tried using GPTNeoForCausalLM to load the 2.7B model and encountered similar errors in loading state_dict:\r\n\r\n```\r\nSome weights of the model checkpoint at valhalla/gpt_neo_2.7B were not used when initializing GPTNeoForCausalLM: ['transformer.h.24.ln_1.weight', 'transformer.h.24.ln_1.bias', 'transformer.h.24.attn.attention.bias', 'transformer.h.24.attn.attention.masked_bias', 'transformer.h.24.attn.attention.k_proj.weight', ...]\r\n- This IS expected if you are initializing GPTNeoForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing GPTNeoForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\n\r\nTraceback (most recent call last):\r\n File \"<input>\", line 1, in <module>\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers-4.5.0.dev0-py3.8.egg/transformers/modeling_utils.py\", line 1181, in from_pretrained\r\n raise RuntimeError(\r\nRuntimeError: Error(s) in loading state_dict for GPTNeoForCausalLM:\r\n\tsize mismatch for transformer.wte.weight: copying a param with shape torch.Size([50257, 2560]) from checkpoint, the shape in current model is torch.Size([50257, 2048]).\r\n\tsize mismatch for transformer.wpe.weight: copying a param with shape torch.Size([2048, 2560]) from checkpoint, the shape in current model is torch.Size([2048, 2048]).\r\n\tsize mismatch for transformer.h.0.ln_1.weight: copying a param with shape torch.Size([2560]) from checkpoint, the shape in current model is torch.Size([2048]).\r\n\tsize mismatch for transformer.h.0.ln_1.bias: copying a param with shape torch.Size([2560]) from checkpoint, the shape in current model is torch.Size([2048]).\r\n...\r\n```\r\n",
"Hi @esperie\r\nThis is a WIP PR, so things are supposed to break, please wait till it's merged to report issues. Thanks!\r\nAlso it's because I renamed a few config params and need to update the model on the hub",
"One thing I've caught testing the neo model is that if i try to add a padding token to the tokenizer after loading it from pretrained (i.e to predict batches instead of a single sequence at a time), then i get:\r\n\r\n`RuntimeError: CUDA error: device-side assert triggered`\r\n\r\nI guess because the tokenizer vocabulary is different to the way it was initialized. I'm not sure if this is a HF-wide problem (although I don't recall this being a problem with GPT2Tokenizer.from_pretrained('gpt2')) or specific to neo, but here is the code to reproduce the error:\r\n\r\n```python\r\nimport torch\r\nfrom transformers import GPTNeoForCausalLM, GPT2Tokenizer\r\nckpt_2b = \"EleutherAI/gpt_neo_2-7B\"\r\ntokenizer = GPT2Tokenizer.from_pretrained(ckpt_2b)\r\ntokenizer.add_special_tokens({'pad_token': '<|padding|>'})\r\nids = tokenizer(\"hello world\", return_tensors=\"pt\").input_ids.to(\"cuda\")\r\n```",
"maybe I'm just going insane, or doing something stupid, because swapping out ckpt_2b for 'gpt2' is giving the same error. We never had this problem training with gpt-neox. Can anyone reproduce, and if so, should I open up a new issue?",
"Hey @sdtblck! I think the issue here is because you're adding a new token to your tokenizer (so you're extending your vocab), but you're not resizing the token embedding matrix.\r\n\r\nWhen you're creating the GPT-2 tokenizer from your checkpoint, you should have a tokenizer size of 50257:\r\n ```py\r\nfrom transformers import GPTNeoForCausalLM, GPT2Tokenizer\r\nckpt_2b = \"EleutherAI/gpt_neo_2-7B\"\r\ntokenizer = GPT2Tokenizer.from_pretrained(ckpt_2b)\r\nprint(len(tokenizer))\r\n# 50257\r\n```\r\n\r\nThat's the same size as the model token embedding matrix:\r\n\r\n```py\r\nprint(model.get_input_embeddings())\r\n# Embedding(50257, 2560)\r\n```\r\n\r\nWhen adding a new token, you should also resize the token embedding matrix alongside it. Otherwise you'll get some index out of range issues, as you'll be trying to obtain the 50258th row of a matrix with 50257 rows. Please add the following line to your code, once you have added a token to your tokenizer and instantiated your model:\r\n```py\r\nmodel.resize_token_embeddings(len(tokenizer))\r\n```\r\n\r\nEverything should be working smoothly now :)",
"Hm, @LysandreJik so doing that does make the error to go away, but sampling with the model when I've added padding tokens seems to cause almost everything in the prediction to become padding. Let me know if i should take this somewhere else btw, don't want to clog up this PR if this issue doesn't relate to it at all.\r\n\r\n`predict` below is pretty much just a wrapper around model.generate()\r\n\r\n```python\r\nprompt = \"Q: What is the meaning of life? A:\"\r\n\r\ngen_text = predict(prompt)\r\nprint('-'*100)\r\nprint(gen_text)\r\n\r\ntokenizer.add_special_tokens({'pad_token': '<|padding|>'})\r\nmodel.resize_token_embeddings(len(tokenizer))\r\nmodel.half()\r\n\r\ngen_text = predict(prompt)\r\nprint('-'*100)\r\nprint(gen_text)\r\n```\r\nOutputs:\r\n\r\n```\r\nSetting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\r\n\r\n----------------------------------------------------------------------------------------------------\r\nQ: What is the meaning of life? A: It is the sum total of the events and happenings which lead to the end of this human life. A person dies because of the event or occurrence which gives birth to his life. In other words, every time a person dies he brings a new life beginning from his own death. In short, if something happens in a human life, it will lead to a life, but if there is no event or occurrence, it will lead to death. Every life matters greatly - everyone has their own life. Life is a measure of happiness, a measure of fulfillment, and a measure of the value and the quality of a person. It is a reflection of everything that has led to a person's development; therefore, Column 1 of the book contains the questions, \"What is the meaning of life?\" and \"What is happiness?\" Column 2 contains the answers. The third column contains the answers taken from the column of questions raised by the readers.\r\n\r\nQ: What is the meaning of life? A: It is the sum total of the events and happenings which lead to the end of this human life. A person dies because of the event or occurrence which gives birth to his life. In other words, every time a person dies he brings a new life beginning from his own death. In short, if something happens in a human life, it will lead to a life, but if there is no event or occurrence, it will lead to death. Every life matters greatly - everyone has their\r\n----------------------------------------------------------------------------------------------------\r\nQ: What is the meaning of life? A: It<|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|> ... ```",
"Hi @sdtblck \r\n\r\nFor batch generation with GPT like models, the text should be padded to the left.\r\n\r\nthis is how batch generation works\r\n\r\n```python\r\nmodel.config.pad_token_id = tokenizer.pad_token_id\r\ntokenizer.padding_side = \"left\"\r\n\r\ninputs = tokenizer(sentences, return_tensors=\"pt\", padding=True)\r\noutputs = model.generate(\r\n input_ids=inputs[\"input_ids\"],\r\n attention_mask=inputs[\"attention_mask\"]\r\n)\r\n```",
"Also, the actual vocab size of the model is 50257 so token ids range from 0 to 50256. This `<|padding|>` padding token is not in the embedding matrix, so I doubt if generation will work as expected when using `<|padding|>` as pad token. Instead, this is what we can do, set the `eos_token` as pad token and set the padding side to `left`.\r\n\r\n```python\r\ntokenizer.pad_token_id = tokenizer.eos_token\r\nmodel.config.pad_token_id = tokenizer.eos_token_id\r\n\r\ntokenizer.padding_side = \"left\"\r\n\r\ninputs = tokenizer(sentences, return_tensors=\"pt\", padding=True)\r\ngen_tokens = model.generate(\r\n inputs[\"input_ids\"],\r\n attention_mask=inputs[\"attention_mask\"]\r\n)\r\n ```\r\n\r\nThis should work. Or feel free to open an issue if this is not working.",
"@StellaAthena \r\n\r\nThe `convert_gpt2_original_tf_checkpoint_to_pytorch.py` now works with the GPT-Neo config, it reads the neo config and initializess HF config from that. Should be now easy to convert the mesh-tf models to PT. ",
"> @StellaAthena\r\n> \r\n> The `convert_gpt2_original_tf_checkpoint_to_pytorch.py` now works with the GPT-Neo config, it reads the neo config and initializess HF config from that. Should be now easy to convert the mesh-tf models to PT.\r\n\r\nDo you by any chance have an example input/output with the conversion script? I was having trouble getting the new code to work with the default configs in the gpt-neo repo.",
"There are models listed on the eleutherai HuggingFace account that AFAIK we did not post. Are these the pretrained models @patil-suraj had been hosting?",
"I was referring to the pre-trained models posted here: https://the-eye.eu/public/AI/gptneo-release/"
] | 1,616 | 1,617 | 1,617 | MEMBER | null | # What does this PR do?
This PR adds the [GPT Neo model](https://github.com/EleutherAI/gpt-neo).
The model architecture is very similar to GPT2, except that it uses local attention in alternate layers.
- The `LocalAttention` module implements the local attention. The implementation is not as clean as it should be and will be cleaned up in a follow-up PR.
- To enable caching (`use_cache`), the local attention layer caches the `hidden_states` instead of `past_key_value_states`.
Also, right now the current length cannot be greater than 1 when `use_cache` is enabled.
- The model uses the same tokenizer as GPT2 so does not need a new tokenizer class.
Example usage:
```python
import torch
from transformers import GPTNeoForCausalLM, AutoTokenizer
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
unicorns = "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " \
"previously unexplored valley, in the Andes Mountains. Even more surprising to the " \
"researchers was the fact that the unicorns spoke perfect English."
input_ids = tokenizer(unicorns, return_tensors="pt").input_ids
# add the length of the prompt tokens to match with the mesh-tf generation
max_length = 400 + input_ids.shape[1]
temperature = .9
do_sample = True
# set seed to reproduce samples
torch.manual_seed(42)
gen_tokens = model.generate(
input_ids,
do_sample=do_sample,
min_length=max_length,
max_length=max_length,
temperature=temperature,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
```
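For generating from several prompts at once, padding is needed; a minimal sketch, assuming the eos token is reused as the pad token and inputs are padded on the left (the prompts below are placeholders):
```python
from transformers import GPTNeoForCausalLM, AutoTokenizer

model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")

# reuse eos as pad and pad on the left so generation continues from the real tokens
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"
model.config.pad_token_id = tokenizer.eos_token_id

sentences = ["Hello, my name is", "The meaning of life is"]
inputs = tokenizer(sentences, return_tensors="pt", padding=True)
gen_tokens = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    do_sample=True,
    max_length=50,
)
print(tokenizer.batch_decode(gen_tokens, skip_special_tokens=True))
```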
Future TODOs:
- clean up the implementation of `LocalAttention`, especially the creation of the `attention_mask`.
- test fine-tuning.
- enable current length > 1 when `use_cache` is enabled.
- Add more robust and aggressive tests for the `LocalAttention` module.
- Add `TF` model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10848/reactions",
"total_count": 26,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 9,
"rocket": 13,
"eyes": 4
} | https://api.github.com/repos/huggingface/transformers/issues/10848/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10848",
"html_url": "https://github.com/huggingface/transformers/pull/10848",
"diff_url": "https://github.com/huggingface/transformers/pull/10848.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10848.patch",
"merged_at": 1617111750000
} |
https://api.github.com/repos/huggingface/transformers/issues/10847 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10847/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10847/comments | https://api.github.com/repos/huggingface/transformers/issues/10847/events | https://github.com/huggingface/transformers/pull/10847 | 837,440,327 | MDExOlB1bGxSZXF1ZXN0NTk3Nzc1OTA2 | 10,847 | fix code quality issues | {
"login": "withshubh",
"id": 25361949,
"node_id": "MDQ6VXNlcjI1MzYxOTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/25361949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/withshubh",
"html_url": "https://github.com/withshubh",
"followers_url": "https://api.github.com/users/withshubh/followers",
"following_url": "https://api.github.com/users/withshubh/following{/other_user}",
"gists_url": "https://api.github.com/users/withshubh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/withshubh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/withshubh/subscriptions",
"organizations_url": "https://api.github.com/users/withshubh/orgs",
"repos_url": "https://api.github.com/users/withshubh/repos",
"events_url": "https://api.github.com/users/withshubh/events{/privacy}",
"received_events_url": "https://api.github.com/users/withshubh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @LysandreJik :wave: Please review this PR",
"Hi @withshubh, as said before in https://github.com/huggingface/transformers/pull/8950#issuecomment-781222459, we would like to stay with our current tooling for now. Thank you."
] | 1,616 | 1,617 | 1,617 | NONE | null | ### Description
Hi :wave: I sent PR #8950, which changes a lot of files, so I am sending this PR with fixes in just a few files so that it can be reviewed easily.
You can have a look at the various issues that were caught in the codebase [here](https://deepsource.io/gh/withshubh/transformers/issues/?category=recommended).
### Summary of changes
- Removed length check in favour of truthiness of the object
> Boosts minor performance, see the description [here](https://deepsource.io/gh/withshubh/transformers/issue/PYL-C1801/description/).
- Removed unnecessary comprehension
> boosts minor performance, see the description [here](https://deepsource.io/gh/withshubh/transformers/issue/PTC-W0016/description/).
- Removed unnecessary use of comprehension
> boosts minor performance, see the description [here](https://deepsource.io/gh/withshubh/transformers/issue/PTC-W0019/description/).
- Refactored the comparison involving `not`
> fixed [antipattern](https://deepsource.io/gh/withshubh/transformers/issue/PYL-C0113/description/)
- Removed unnecessary return statement
> removes [antipattern](https://deepsource.io/gh/withshubh/transformers/issue/PYL-R1711/description/)
- Iterated dictionary directly
> removes [antipattern](https://deepsource.io/gh/withshubh/transformers/issue/PYL-C0201/description/)
- Used literal syntax instead of function calls to create data structure
> boosts minor [performance](https://deepsource.io/gh/withshubh/transformers/issue/PYL-C1801/description/)
- Added .deepsource.toml
> config file to continuously analyze the repo for code quality issues | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10847/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10847",
"html_url": "https://github.com/huggingface/transformers/pull/10847",
"diff_url": "https://github.com/huggingface/transformers/pull/10847.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10847.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10846 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10846/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10846/comments | https://api.github.com/repos/huggingface/transformers/issues/10846/events | https://github.com/huggingface/transformers/pull/10846 | 837,416,229 | MDExOlB1bGxSZXF1ZXN0NTk3NzU1Mzc4 | 10,846 | [Wav2Vec2] Small tab fix | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10846/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10846",
"html_url": "https://github.com/huggingface/transformers/pull/10846",
"diff_url": "https://github.com/huggingface/transformers/pull/10846.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10846.patch",
"merged_at": 1616398341000
} |
https://api.github.com/repos/huggingface/transformers/issues/10845 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10845/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10845/comments | https://api.github.com/repos/huggingface/transformers/issues/10845/events | https://github.com/huggingface/transformers/issues/10845 | 837,396,118 | MDU6SXNzdWU4MzczOTYxMTg= | 10,845 | Option to change loss function for fine tuning | {
"login": "frankhart2018",
"id": 38374913,
"node_id": "MDQ6VXNlcjM4Mzc0OTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/38374913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankhart2018",
"html_url": "https://github.com/frankhart2018",
"followers_url": "https://api.github.com/users/frankhart2018/followers",
"following_url": "https://api.github.com/users/frankhart2018/following{/other_user}",
"gists_url": "https://api.github.com/users/frankhart2018/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frankhart2018/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankhart2018/subscriptions",
"organizations_url": "https://api.github.com/users/frankhart2018/orgs",
"repos_url": "https://api.github.com/users/frankhart2018/repos",
"events_url": "https://api.github.com/users/frankhart2018/events{/privacy}",
"received_events_url": "https://api.github.com/users/frankhart2018/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can change the loss function to anything you want. Here's an example:\r\n\r\n```\r\nfrom transformers import BertModel\r\nfrom transformers.modeling_outputs import SequenceClassifierOutput\r\nimport torch.nn as nn\r\n\r\nclass FancyBertModelWithCustomLossFunction(nn.Module):\r\n def __init__(self):\r\n super(FancyBertModelWithCustomLossFunction, self).__init__()\r\n self.bert = BertModel.from_pretrained(\"bert-base-uncased\")\r\n self.dropout = nn.Dropout(0.3)\r\n self.classifier = nn.Linear(768, 1)\r\n\r\n def forward(self, ids, mask, token_type_ids, labels=None):\r\n outputs = self.bert(ids, attention_mask=mask, token_type_ids=token_type_ids)\r\n\r\n output = self.dropout(outputs.pooler_output)\r\n logits = self.classifier(output)\r\n \r\n loss = None\r\n if labels is not None:\r\n if self.num_labels == 1:\r\n # We are doing regression\r\n loss_fct = MSELoss()\r\n loss = loss_fct(logits.view(-1), labels.view(-1))\r\n else:\r\n # you can define any loss function here yourself\r\n # see https://pytorch.org/docs/stable/nn.html#loss-functions for an overview\r\n loss_fct = nn.BinaryCrossEntropyLoss()\r\n # next, compute the loss based on logits + ground-truth labels\r\n loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))\r\n\r\n return SequenceClassifierOutput(\r\n loss=loss,\r\n logits=logits,\r\n hidden_states=outputs.hidden_states,\r\n attentions=outputs.attentions,\r\n )\r\n\r\n```",
"@NeilsRogge I am aware of this, what I was referring to is an option that can be passed directly to let's say `DistilBertForSequenceClassification` or any other model class without having to write a pytorch model like this. ",
"This has already been asked before, but we are not planning to do this.\r\n\r\nSee also [this comment](https://github.com/huggingface/transformers/issues/9625#issuecomment-762167788) in #9625",
"Oh ok, got it. Thanks @NeilsRogge. Closing this."
] | 1,616 | 1,616 | 1,616 | NONE | null | # 🚀 Feature request
## Motivation
I was working on a multi-class text classification problem using `DistilBertForSequenceClassification`, and I found that there is no way to change the loss function from `CrossEntropyLoss`.
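In the meantime, a workaround is to override the loss computation in a `Trainer` subclass; a minimal sketch, assuming a recent `Trainer` whose `compute_loss` can be overridden (the class name, the class weights, and the choice of loss below are placeholders):
```python
import torch
from transformers import Trainer


class WeightedLossTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        # any loss can go here; class-weighted cross entropy as an illustration
        class_weights = torch.tensor([1.0, 2.0, 0.5], device=logits.device)
        loss_fct = torch.nn.CrossEntropyLoss(weight=class_weights)
        loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```
Such a subclass is then used exactly like the stock `Trainer` (e.g. `WeightedLossTrainer(model=model, args=training_args, train_dataset=train_dataset)`).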
## Your contribution
I can submit a PR, if this feature request is approved.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10845/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10844 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10844/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10844/comments | https://api.github.com/repos/huggingface/transformers/issues/10844/events | https://github.com/huggingface/transformers/issues/10844 | 837,364,574 | MDU6SXNzdWU4MzczNjQ1NzQ= | 10,844 | Add GPT-Neo | {
"login": "aolko",
"id": 581458,
"node_id": "MDQ6VXNlcjU4MTQ1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/581458?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aolko",
"html_url": "https://github.com/aolko",
"followers_url": "https://api.github.com/users/aolko/followers",
"following_url": "https://api.github.com/users/aolko/following{/other_user}",
"gists_url": "https://api.github.com/users/aolko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aolko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aolko/subscriptions",
"organizations_url": "https://api.github.com/users/aolko/orgs",
"repos_url": "https://api.github.com/users/aolko/repos",
"events_url": "https://api.github.com/users/aolko/events{/privacy}",
"received_events_url": "https://api.github.com/users/aolko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,616 | 1,619 | 1,619 | NONE | null | # 🌟 New model addition
Please add GPT-Neo
## Model description
> GPT-Neo is the code name for a series of transformer-based language models loosely styled around the GPT architecture that Eleuther AI plans to train and open source. Eleuther AI's primary goal is to replicate a GPT-3 sized model and open source it to the public, for free.
<!-- Important information -->
## Open source status
* [x] the model implementation is available: [Repo](https://github.com/EleutherAI/gpt-neo)
* [x] the model weights are available: [Download](https://the-eye.eu/eleuther_staging/gptneo-release/) (1.3B & 2.7B)
* [x] who are the authors: @sdtblck, @leogao2, @lucidrains, @ConnorJL, @StellaAthena & [others](https://github.com/EleutherAI)
Somewhat related to #4658, #4679, especially [this](https://github.com/huggingface/transformers/issues/4658#issuecomment-754247106) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10844/reactions",
"total_count": 19,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 13,
"rocket": 0,
"eyes": 6
} | https://api.github.com/repos/huggingface/transformers/issues/10844/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10843 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10843/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10843/comments | https://api.github.com/repos/huggingface/transformers/issues/10843/events | https://github.com/huggingface/transformers/issues/10843 | 837,324,819 | MDU6SXNzdWU4MzczMjQ4MTk= | 10,843 | Is there a `DataCollator` cat mask n-gram words for LM? | {
"login": "wa008",
"id": 29834520,
"node_id": "MDQ6VXNlcjI5ODM0NTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/29834520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wa008",
"html_url": "https://github.com/wa008",
"followers_url": "https://api.github.com/users/wa008/followers",
"following_url": "https://api.github.com/users/wa008/following{/other_user}",
"gists_url": "https://api.github.com/users/wa008/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wa008/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wa008/subscriptions",
"organizations_url": "https://api.github.com/users/wa008/orgs",
"repos_url": "https://api.github.com/users/wa008/repos",
"events_url": "https://api.github.com/users/wa008/events{/privacy}",
"received_events_url": "https://api.github.com/users/wa008/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1897896961,
"node_id": "MDU6TGFiZWwxODk3ODk2OTYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Migration",
"name": "Migration",
"color": "e99695",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | NONE | null | # 📚 Migration
## Information
I want to mask n-gram spans of words when pre-training a BERT model, but I can't find a suitable `DataCollator` in the library:
https://github.com/huggingface/transformers/blob/master/src/transformers/data/data_collator.py
I want to build it myself, but I don't know how to write my own `DataCollator`. Could someone show me a demo?
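For concreteness, here is a rough sketch of the kind of collator I have in mind: it masks random contiguous spans of up to `max_ngram` tokens instead of single tokens. This is only a sketch, not an existing class in the library; the span length, masking rate, and the handling of special/padding tokens are simplified:
```python
import random
from dataclasses import dataclass

import torch


@dataclass
class NgramMaskingCollator:
    """Masks random contiguous spans of 1..max_ngram tokens for MLM pre-training."""

    tokenizer: object          # a tokenizer with pad() / mask_token_id, e.g. BertTokenizer
    max_ngram: int = 3         # longest span to mask at once (arbitrary choice)
    mask_ratio: float = 0.15   # rough fraction of tokens to cover with masked spans

    def __call__(self, examples):
        ids = [e["input_ids"] for e in examples]
        batch = self.tokenizer.pad({"input_ids": ids}, return_tensors="pt")
        input_ids = batch["input_ids"]
        labels = input_ids.clone()

        for row in range(input_ids.size(0)):
            seq_len = int(batch["attention_mask"][row].sum())
            num_to_mask = max(1, int(seq_len * self.mask_ratio))
            masked = 0
            while masked < num_to_mask:
                n = random.randint(1, self.max_ngram)
                start = random.randrange(0, max(1, seq_len - n))
                # note: for brevity this may also hit [CLS]/[SEP]; a real collator should skip them
                input_ids[row, start:start + n] = self.tokenizer.mask_token_id
                masked += n

        # only compute the loss on the masked positions
        labels[input_ids != self.tokenizer.mask_token_id] = -100
        batch["labels"] = labels
        return batch
```
It could then be passed to the `Trainer` as `data_collator=NgramMaskingCollator(tokenizer)`.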
## Checklist
- [ ] I have read the migration guide in the readme.
([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers);
[pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [ ] I checked if a related official extension example runs on my machine.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10843/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10842 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10842/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10842/comments | https://api.github.com/repos/huggingface/transformers/issues/10842/events | https://github.com/huggingface/transformers/issues/10842 | 837,272,360 | MDU6SXNzdWU4MzcyNzIzNjA= | 10,842 | How to fine-tune RAG on MS-MARCO dataset? | {
"login": "tangxiangru",
"id": 22478336,
"node_id": "MDQ6VXNlcjIyNDc4MzM2",
"avatar_url": "https://avatars.githubusercontent.com/u/22478336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tangxiangru",
"html_url": "https://github.com/tangxiangru",
"followers_url": "https://api.github.com/users/tangxiangru/followers",
"following_url": "https://api.github.com/users/tangxiangru/following{/other_user}",
"gists_url": "https://api.github.com/users/tangxiangru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tangxiangru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tangxiangru/subscriptions",
"organizations_url": "https://api.github.com/users/tangxiangru/orgs",
"repos_url": "https://api.github.com/users/tangxiangru/repos",
"events_url": "https://api.github.com/users/tangxiangru/events{/privacy}",
"received_events_url": "https://api.github.com/users/tangxiangru/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,616 | 1,616 | 1,616 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10842/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/10841 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10841/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10841/comments | https://api.github.com/repos/huggingface/transformers/issues/10841/events | https://github.com/huggingface/transformers/issues/10841 | 837,267,087 | MDU6SXNzdWU4MzcyNjcwODc= | 10,841 | issue of run_mlm.py | {
"login": "sataliulan",
"id": 6769310,
"node_id": "MDQ6VXNlcjY3NjkzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6769310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sataliulan",
"html_url": "https://github.com/sataliulan",
"followers_url": "https://api.github.com/users/sataliulan/followers",
"following_url": "https://api.github.com/users/sataliulan/following{/other_user}",
"gists_url": "https://api.github.com/users/sataliulan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sataliulan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sataliulan/subscriptions",
"organizations_url": "https://api.github.com/users/sataliulan/orgs",
"repos_url": "https://api.github.com/users/sataliulan/repos",
"events_url": "https://api.github.com/users/sataliulan/events{/privacy}",
"received_events_url": "https://api.github.com/users/sataliulan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | NONE | null | hello guys, I try to finetune bert in my own dataset (line by line txt, language:Chinese), follow the guid code of run_mlm.py example.
The tokenizer is the pretrained BERT tokenizer (tokenizer=AutoTokenizer.from_pretrained('bert-base-chinese')) and the model is bert-base-chinese, as follows:
config=BertConfig.from_pretrained('bert-base-chinese')
print(config)
model=BertForMaskedLM(config=config)
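Note: `BertForMaskedLM(config=config)` builds a randomly initialized model from the configuration only; if the intent is to continue pre-training from the released Chinese checkpoint, the weights have to be loaded explicitly, e.g. (a one-line sketch, assuming that is the intent):
```python
from transformers import BertForMaskedLM

model = BertForMaskedLM.from_pretrained('bert-base-chinese')
```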
When I started the trainer, I got the following errors:
Using custom data configuration default-cd6deed448eea358
Downloading and preparing dataset text/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/hcl/.cache/huggingface/datasets/text/default-cd6deed448eea358/0.0.0/293ecb642f9fca45b44ad1f90c8445c54b9d80b95ab3fca3cfa5e1e3d85d4a57...
Dataset text downloaded and prepared to /home/hcl/.cache/huggingface/datasets/text/default-cd6deed448eea358/0.0.0/293ecb642f9fca45b44ad1f90c8445c54b9d80b95ab3fca3cfa5e1e3d85d4a57. Subsequent calls will reuse this data.
100%|██████████| 3264/3264 [01:06<00:00, 48.83ba/s]
0%| | 0/3264 [00:00<?, ?ba/s]
Traceback (most recent call last):
File "/home/hcl/miniconda3/envs/pytorch/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1582, in _map_single
writer.write_batch(batch)
File "/home/hcl/miniconda3/envs/pytorch/lib/python3.7/site-packages/datasets/arrow_writer.py", line 276, in write_batch
pa_table = pa.Table.from_pydict(typed_sequence_examples)
File "pyarrow/table.pxi", line 1559, in pyarrow.lib.Table.from_pydict
File "pyarrow/array.pxi", line 331, in pyarrow.lib.asarray
File "pyarrow/array.pxi", line 222, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home/hcl/miniconda3/envs/pytorch/lib/python3.7/site-packages/datasets/arrow_writer.py", line 98, in __arrow_array__
if trying_type and out[0].as_py() != self.data[0]:
File "pyarrow/array.pxi", line 1067, in pyarrow.lib.Array.__getitem__
File "pyarrow/array.pxi", line 549, in pyarrow.lib._normalize_index
IndexError: index out of bounds
python-BaseException
For now I don't know why; the first epoch runs well.
Any help?
PS: my OS is Deepin 15, my GPU is an NVIDIA RTX 2080 Ti, and I fine-tune my dataset on PyTorch 1.6 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10841/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10840 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10840/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10840/comments | https://api.github.com/repos/huggingface/transformers/issues/10840/events | https://github.com/huggingface/transformers/issues/10840 | 837,249,959 | MDU6SXNzdWU4MzcyNDk5NTk= | 10,840 | why My Albert pretrain loss can't decrease? | {
"login": "wa008",
"id": 29834520,
"node_id": "MDQ6VXNlcjI5ODM0NTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/29834520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wa008",
"html_url": "https://github.com/wa008",
"followers_url": "https://api.github.com/users/wa008/followers",
"following_url": "https://api.github.com/users/wa008/following{/other_user}",
"gists_url": "https://api.github.com/users/wa008/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wa008/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wa008/subscriptions",
"organizations_url": "https://api.github.com/users/wa008/orgs",
"repos_url": "https://api.github.com/users/wa008/repos",
"events_url": "https://api.github.com/users/wa008/events{/privacy}",
"received_events_url": "https://api.github.com/users/wa008/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1897896961,
"node_id": "MDU6TGFiZWwxODk3ODk2OTYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Migration",
"name": "Migration",
"color": "e99695",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hi! ALBERT is known to have issues converging in some cases as all layers are shared. See the following issues for similar issues and potential resolutions:\r\n\r\nhttps://github.com/huggingface/transformers/issues/5984\r\nhttps://github.com/huggingface/transformers/issues/4727\r\nhttps://github.com/huggingface/transformers/issues/2553",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | NONE | null | # 📚 Migration
## Information
<!-- Important information -->
Model I am using: ALBERT
Language I am using the model on: just digits (desensitized Chinese)
The problem arises when using: ALBERT with `Trainer`
## Details
The loss decreases normally when I use this same configuration with RoBERTa, but when I replace RoBERTa with ALBERT the loss does not decrease. I don't know what the problem is; please help.
```
%%time
bert_file = './albert'
from transformers import Trainer, TrainingArguments
from transformers import LineByLineTextDataset, DataCollatorForLanguageModeling
from transformers import AlbertConfig, AlbertForMaskedLM
config = AlbertConfig(
hidden_size = 768,
num_attention_heads = 12,
intermediate_size = 3072,
vocab_size = vocab_size + 10
)
model = AlbertForMaskedLM(config=config)
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
train_dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path="./my_wiki_file",
block_size=128,
)
training_args = TrainingArguments(
output_dir=bert_file,
overwrite_output_dir=True,
num_train_epochs=40,
per_device_train_batch_size=64,
save_steps=100000,
save_total_limit=2,
prediction_loss_only=False,
)
%%time
trainer = Trainer(
model = model,
args = training_args,
data_collator = data_collator,
train_dataset = train_dataset
)
trainer.train()
```
This is the result:
```
Step Training Loss
500 6.687300
1000 4.034700
1500 3.826200
2000 3.777200
2500 3.788800
3000 3.751100
3500 3.780000
4000 3.772900
4500 3.795800
5000 3.737000
5500 3.782300
6000 3.775600
6500 3.821400
7000 3.730200
7500 3.751700
8000 3.787000
8500 3.824500
9000 3.746300
9500 3.782600
10000 3.770600
```
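One kind of adjustment that is sometimes suggested for ALBERT convergence is an explicit learning rate together with a long warmup. A hedged sketch only; the exact numbers here are arbitrary and not a confirmed fix:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir=bert_file,
    overwrite_output_dir=True,
    num_train_epochs=40,
    per_device_train_batch_size=64,
    learning_rate=1e-4,   # explicit peak LR (arbitrary value)
    warmup_steps=10000,   # long warmup before the LR decays
    weight_decay=0.01,
    save_steps=100000,
    save_total_limit=2,
)
```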
## Environment info
- `transformers` version:4.5.0.dev0
- Platform:Kaggle notebook
- Python version:3.7
- PyTorch version (GPU?):
- torch version: 1.8.0
- Using GPU in script?:YES
- Using distributed or parallel set-up in script?:NO
## Checklist
- [ ] I have read the migration guide in the readme.
([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers);
[pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [ ] I checked if a related official extension example runs on my machine.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10840/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10839 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10839/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10839/comments | https://api.github.com/repos/huggingface/transformers/issues/10839/events | https://github.com/huggingface/transformers/pull/10839 | 837,202,751 | MDExOlB1bGxSZXF1ZXN0NTk3NTc3NzUx | 10,839 | Fix on_step_begin and on_step_end Callback Sequencing | {
"login": "siddk",
"id": 2498509,
"node_id": "MDQ6VXNlcjI0OTg1MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2498509?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/siddk",
"html_url": "https://github.com/siddk",
"followers_url": "https://api.github.com/users/siddk/followers",
"following_url": "https://api.github.com/users/siddk/following{/other_user}",
"gists_url": "https://api.github.com/users/siddk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/siddk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/siddk/subscriptions",
"organizations_url": "https://api.github.com/users/siddk/orgs",
"repos_url": "https://api.github.com/users/siddk/repos",
"events_url": "https://api.github.com/users/siddk/events{/privacy}",
"received_events_url": "https://api.github.com/users/siddk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | # What does this PR do?
Currently, the Trainer exhibits the following behavior (simplified):
```
for step, input in epoch_iterator:
if (step + 1) % self.args.gradient_accumulation_steps == 0:
callback_handler.on_step_begin()
...
if (step + 1) % self.args.gradient_accumulation_steps == 0:
# Apply Gradient Update (Finished accumulating)
optimizer.step()
callback_handler.on_step_end()
```
Unfortunately, this means that `on_step_begin()` gets called during the same iteration, *before* `on_step_end()`, which is incorrect and confuses folks implementing custom callbacks for timing individual iterations (like my team!).
Instead, with this fix `on_step_begin()` is first called at step 0 (iteration 0), and afterwards it is only called on the step following an `on_step_end()`.
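In other words, the intended ordering looks roughly like this (a simplified sketch in the same spirit as the snippet above, not the literal patch):
```
for step, inputs in enumerate(epoch_iterator):
    if step % self.args.gradient_accumulation_steps == 0:
        # fires at step 0, then only after the previous optimizer step has finished
        callback_handler.on_step_begin()

    ...  # forward / backward on this micro-batch

    if (step + 1) % self.args.gradient_accumulation_steps == 0:
        optimizer.step()  # apply the accumulated gradients
        callback_handler.on_step_end()
```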
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
Code updates part of the Trainer, so tagging @sgugger.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10839/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10839",
"html_url": "https://github.com/huggingface/transformers/pull/10839",
"diff_url": "https://github.com/huggingface/transformers/pull/10839.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10839.patch",
"merged_at": 1616418939000
} |
https://api.github.com/repos/huggingface/transformers/issues/10838 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10838/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10838/comments | https://api.github.com/repos/huggingface/transformers/issues/10838/events | https://github.com/huggingface/transformers/issues/10838 | 837,188,864 | MDU6SXNzdWU4MzcxODg4NjQ= | 10,838 | Can’t download the pre-trained pegasus-large model | {
"login": "xiaohy9",
"id": 75334329,
"node_id": "MDQ6VXNlcjc1MzM0MzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/75334329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaohy9",
"html_url": "https://github.com/xiaohy9",
"followers_url": "https://api.github.com/users/xiaohy9/followers",
"following_url": "https://api.github.com/users/xiaohy9/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaohy9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiaohy9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaohy9/subscriptions",
"organizations_url": "https://api.github.com/users/xiaohy9/orgs",
"repos_url": "https://api.github.com/users/xiaohy9/repos",
"events_url": "https://api.github.com/users/xiaohy9/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiaohy9/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @xiaohy9,\r\n\r\nThanks for the issue! It should be resolved now. See: https://huggingface.co/google/pegasus-large/commit/4510ba69cc183d23e892e7728a40fdcf42e83079 . \r\n\r\nCould you try again? ",
"Yes, it works now. Thanks for the quick response!\r\nHowever, I saw some similar issue using pegasus-large as using pegasus-xsum, with details mentioned here: https://github.com/huggingface/transformers/issues/10837",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.2
- Platform:
- Python version: 3.7.10
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.4.1
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?:
## Who can help
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
## Information
It appears that there is a problem with the huggingface.co model URL.
Code:
```
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM
tokenizer2 = AutoTokenizer.from_pretrained("google/pegasus-large")
model2 = TFAutoModelForSeq2SeqLM.from_pretrained("google/pegasus-large")
inputs1_2 = tokenizer2.encode("summarize: " + text1, return_tensors="tf", max_length=1024)
outputs1_2 = model2.generate(inputs1_2, max_length=150, min_length=40, length_penalty=2.0, num_beams=4, early_stopping=True)
outputs1_2, tokenizer2.decode(outputs1_2[0])
```
error message:
```
404 Client Error: Not Found for url: https://huggingface.co/google/pegasus-large/resolve/main/tf_model.h5
…
OSError: Can't load weights for 'google/pegasus-large'. Make sure that:
- 'google/pegasus-large' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'google/pegasus-large' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.
```
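As a temporary workaround until the TF weights are uploaded, I believe the TF class can usually convert the PyTorch checkpoint on the fly with `from_pt=True` (this needs `torch` installed; I have not verified it for this particular checkpoint):
```
model2 = TFAutoModelForSeq2SeqLM.from_pretrained("google/pegasus-large", from_pt=True)
```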
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10838/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10837 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10837/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10837/comments | https://api.github.com/repos/huggingface/transformers/issues/10837/events | https://github.com/huggingface/transformers/issues/10837 | 837,187,665 | MDU6SXNzdWU4MzcxODc2NjU= | 10,837 | pegasus-xsum summarized a story of Eiffel Tower into one on the World Trade Center | {
"login": "xiaohy9",
"id": 75334329,
"node_id": "MDQ6VXNlcjc1MzM0MzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/75334329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaohy9",
"html_url": "https://github.com/xiaohy9",
"followers_url": "https://api.github.com/users/xiaohy9/followers",
"following_url": "https://api.github.com/users/xiaohy9/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaohy9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiaohy9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaohy9/subscriptions",
"organizations_url": "https://api.github.com/users/xiaohy9/orgs",
"repos_url": "https://api.github.com/users/xiaohy9/repos",
"events_url": "https://api.github.com/users/xiaohy9/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiaohy9/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @xiaoda99,\r\n\r\nYou should **not** append a `summarize: ` prefix for Pegasus.\r\n\r\nRunning this code:\r\n\r\n```python\r\ntext1=\"The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.\" \r\n\r\nfrom transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM\r\ntokenizer1 = AutoTokenizer.from_pretrained(\"google/pegasus-xsum\") \r\nmodel1 = TFAutoModelForSeq2SeqLM.from_pretrained(\"google/pegasus-xsum\")\r\ninputs1_1 = tokenizer1.encode(text1, return_tensors=\"tf\", max_length=1024)\r\noutputs1_1 = model1.generate(inputs1_1, max_length=150, min_length=40, length_penalty=2.0, num_beams=4, early_stopping=True)\r\ntokenizer1.decode(outputs1_1[0]) \r\n```\r\n\r\ngives me better results: \r\n\r\n```\r\n\"<pad> The Eiffel Tower is a free-standing structure in Paris, France, built in 1889 by Gustave Eiffel as a monument to his country's national symbol, the Eiffel Tower, which was later renamed the Louvre.\"\r\n```",
"@patrickvonplaten ,\r\nthanks for the response. The summary you posted is about Eiffel Tower, but the information is not really from the input text. The same problem still exists, it spit out some different story than the one in the input, which is likely from the original training data.\r\n\r\nCan you check on why this happens? thanks.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.2
- Platform:
- Python version: 3.7.10
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.4.1
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?:
## Who can help
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
## Information
A weird thing happened when I tried the pegasus-xsum model for text summarization using the example code and data. I noticed that the output describes a similar but obviously different story from the one in the input. I expected to see some description of the Eiffel Tower, but the output is all about New York's World Trade Center!!
I noticed that the online demo version works fine, and the summary output is still on the Eiffel Tower.
https://huggingface.co/google/pegasus-xsum
It appears that the pegasus-xsum model in my code generated the summary from some training data rather than from the input I gave (retained memory?). How can I get the model to behave normally, like the online version?
The code I used (adapted from the online demo page):
```
text1="The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM
tokenizer1 = AutoTokenizer.from_pretrained("google/pegasus-xsum")
model1 = TFAutoModelForSeq2SeqLM.from_pretrained("google/pegasus-xsum")
inputs1_1 = tokenizer1.encode("summarize: " + text1, return_tensors="tf", max_length=1024)
outputs1_1 = model1.generate(inputs1_1, max_length=150, min_length=40, length_penalty=2.0, num_beams=4, early_stopping=True)
tokenizer1.decode(outputs1_1[0])
```
the output I got:
`"<pad> New York's World Trade Center is the tallest building in the United States and one of the world's tallest structures, with a total height of 1,776ft (541m), according to Guinness World Records."`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10837/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10836 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10836/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10836/comments | https://api.github.com/repos/huggingface/transformers/issues/10836/events | https://github.com/huggingface/transformers/issues/10836 | 837,170,205 | MDU6SXNzdWU4MzcxNzAyMDU= | 10,836 | Generating text with MBart Large 50 on GPU with Tensorflow is significantly slower than with Pytorch | {
"login": "xhluca",
"id": 21180505,
"node_id": "MDQ6VXNlcjIxMTgwNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/21180505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xhluca",
"html_url": "https://github.com/xhluca",
"followers_url": "https://api.github.com/users/xhluca/followers",
"following_url": "https://api.github.com/users/xhluca/following{/other_user}",
"gists_url": "https://api.github.com/users/xhluca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xhluca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xhluca/subscriptions",
"organizations_url": "https://api.github.com/users/xhluca/orgs",
"repos_url": "https://api.github.com/users/xhluca/repos",
"events_url": "https://api.github.com/users/xhluca/events{/privacy}",
"received_events_url": "https://api.github.com/users/xhluca/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It seems that the generation function is handled by the [`TFGenerationMixin`](https://github.com/huggingface/transformers/blob/696e8a43655a63b7312e036616f4abd2106e179e/src/transformers/generation_tf_utils.py#L48-L72) whereas in torch it is handled by [`GenerationMixin`](https://github.com/huggingface/transformers/blob/d4d4447d536e5cf8c78518b8b3359168346a4134/src/transformers/generation_utils.py#L665-L699); quickly glancing over the code I notice that the implementation is different. Could there be a discrepancy in implementation that would affect the generation speed?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | CONTRIBUTOR | null | This applies to the [MBart Large MMT 50-language model](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt). It takes about 1m20s to process 10 batches of size 16 x 171 with PyTorch, but about 8min with TensorFlow. Both run on a P100 through Kaggle.
- [Tensorflow notebook](https://www.kaggle.com/xhlulu/tf-mbartforconditionalgeneration-speed-test)
- [Pytorch Notebook](https://www.kaggle.com/xhlulu/mbartforconditionalgeneration-speed-test)
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.2
- Platform: Linux-5.4.89+-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
## Code for pytorch
```python
from tqdm.auto import tqdm
import torch
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है"
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
# translate Hindi to French
tokenizer.src_lang = "hi_IN"
for i in tqdm(range(10)):
    encoded_hi = tokenizer([article_hi*10]*16, return_tensors="pt")
    generated_tokens = model.generate(
        **encoded_hi,
        forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"]
    )
out = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]
# => "Le chef de l 'ONU affirme qu 'il n 'y a pas de solution militaire dans la Syrie."
```
## Code for Tensorflow
```python
from tqdm.auto import tqdm
import tensorflow as tf
from transformers import TFMBartForConditionalGeneration, MBart50TokenizerFast
strategy = tf.distribute.get_strategy()
article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है"
with strategy.scope():
    model = TFMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", from_pt=True)
    tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
# translate Hindi to French
tokenizer.src_lang = "hi_IN"
with strategy.scope():
    for i in tqdm(range(10)):
        encoded_hi = tokenizer([article_hi*10]*16, return_tensors="tf")
        generated_tokens = model.generate(
            **encoded_hi,
            forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"]
        )
out = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]
# => "Le chef de l 'ONU affirme qu 'il n 'y a pas de solution militaire dans la Syrie."
```
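One extra check I could run (a rough sketch on my side, not taken from the notebooks above) would be to time a short `generate` call in both frameworks; if the gap persists even with a tiny `max_length`, the per-step overhead of the TF generation loop is the likely culprit rather than the model forward pass itself:
```python
import time

start = time.time()
_ = model.generate(
    **encoded_hi,
    forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"],
    max_length=10,  # short run: mostly measures per-step loop overhead
)
print(f"short generate took {time.time() - start:.2f}s")
```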
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10836/timeline | completed | null | null |