Dataset schema (column, dtype, and observed ranges as reported by the dataset viewer; โŒ€ marks nullable columns):

| column | dtype | observed values |
|---|---|---|
| url | string | lengths 62-66 |
| repository_url | string | 1 class |
| labels_url | string | lengths 76-80 |
| comments_url | string | lengths 71-75 |
| events_url | string | lengths 69-73 |
| html_url | string | lengths 50-56 |
| id | int64 | 377M-2.15B |
| node_id | string | lengths 18-32 |
| number | int64 | 1-29.2k |
| title | string | lengths 1-487 |
| user | dict | |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k-1.71k |
| updated_at | int64 | 1.54k-1.71k |
| closed_at | int64 | 1.54k-1.71k, nullable โŒ€ |
| author_association | string | 4 classes |
| active_lock_reason | string | 2 classes |
| body | string | lengths 0-234k, nullable โŒ€ |
| reactions | dict | |
| timeline_url | string | lengths 71-75 |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| pull_request | dict | |
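The rows that follow are records with exactly this schema. As a minimal sketch of loading such a dataset with ๐Ÿค— `datasets` — the dataset id below is a hypothetical placeholder, not the actual Hub repo this dump came from:

```python
from datasets import load_dataset

# Hypothetical dataset id -- substitute the Hub repo this dump actually came from.
ds = load_dataset("some-user/transformers-github-issues", split="train")

print(ds.features["state"])  # a string column with 2 observed classes
print(ds[0]["title"])        # the title of the first issue record
```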
https://api.github.com/repos/huggingface/transformers/issues/24920
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24920/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24920/comments
https://api.github.com/repos/huggingface/transformers/issues/24920/events
https://github.com/huggingface/transformers/pull/24920
1,811,984,587
PR_kwDOCUB6oc5V5PNi
24,920
๐ŸŒ [i18n-KO] Translated `perf_infer_cpu.md` to Korean
{ "login": "junejae", "id": 55151385, "node_id": "MDQ6VXNlcjU1MTUxMzg1", "avatar_url": "https://avatars.githubusercontent.com/u/55151385?v=4", "gravatar_id": "", "url": "https://api.github.com/users/junejae", "html_url": "https://github.com/junejae", "followers_url": "https://api.github.com/users/junejae/followers", "following_url": "https://api.github.com/users/junejae/following{/other_user}", "gists_url": "https://api.github.com/users/junejae/gists{/gist_id}", "starred_url": "https://api.github.com/users/junejae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/junejae/subscriptions", "organizations_url": "https://api.github.com/users/junejae/orgs", "repos_url": "https://api.github.com/users/junejae/repos", "events_url": "https://api.github.com/users/junejae/events{/privacy}", "received_events_url": "https://api.github.com/users/junejae/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,690
1,690
CONTRIBUTOR
null
<!-- PR์˜ ์ œ๋ชฉ์€ "๐ŸŒ [i18n-KO] Translated `<your_file>.md` to Korean" ์œผ๋กœ ๋ถ€ํƒ๋“œ๋ฆฝ๋‹ˆ๋‹ค! --> # What does this PR do? Translated the `<your_file>.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [x] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [x] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review? (Initial) <!-- 1. ์œ„ ์ฒดํฌ๊ฐ€ ๋ชจ๋‘ ์™„๋ฃŒ๋œ ๋’ค์—๋งŒ OSSCA ํŒ€์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> <!-- Team OSSCA, may you please review this PR? --> @wonhyeongseo, @keonju2, @harheem, @HongB1, @junejae ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. ํŒ€์›๋“ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ€ ๋๋‚œ ํ›„์—๋งŒ ํ—ˆ๊น…ํŽ˜์ด์Šค ์ง์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24920/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 1, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24920/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24920", "html_url": "https://github.com/huggingface/transformers/pull/24920", "diff_url": "https://github.com/huggingface/transformers/pull/24920.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24920.patch", "merged_at": 1690293854000 }
https://api.github.com/repos/huggingface/transformers/issues/24919
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24919/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24919/comments
https://api.github.com/repos/huggingface/transformers/issues/24919/events
https://github.com/huggingface/transformers/issues/24919
1,811,863,456
I_kwDOCUB6oc5r_teg
24,919
`VisionEncoderDecoderModel.generate()` rejects argument `interpolate_pos_encoding=True`
{ "login": "jungomi", "id": 3986846, "node_id": "MDQ6VXNlcjM5ODY4NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/3986846?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jungomi", "html_url": "https://github.com/jungomi", "followers_url": "https://api.github.com/users/jungomi/followers", "following_url": "https://api.github.com/users/jungomi/following{/other_user}", "gists_url": "https://api.github.com/users/jungomi/gists{/gist_id}", "starred_url": "https://api.github.com/users/jungomi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jungomi/subscriptions", "organizations_url": "https://api.github.com/users/jungomi/orgs", "repos_url": "https://api.github.com/users/jungomi/repos", "events_url": "https://api.github.com/users/jungomi/events{/privacy}", "received_events_url": "https://api.github.com/users/jungomi/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "Hi @jungomi\r\n\r\nThank you for raising this issue! Looks valid, I will take a look quickly.", "Hi @gante \r\n\r\nIt looks we need to update `_validate_model_kwargs` in some way, so some extra but necessary (for correctness) arguments could be allowed even if not in `prepare_inputs_for_generation` (here of the generic class `VisioniEncoderDecoder`.\r\n\r\nLet me try a fix.", "@jungomi \r\n\r\nThe PR is merged. You can try the latest commit on the `main` branch if you would like to ๐Ÿค— " ]
1,689
1,690
1,690
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.35 - Python version: 3.9.7 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction When trying to use a different size than the `VisionEncoderDecoderModel` was trained with, you are supposed to pass `interpolate_pos_encoding=True` to the forward method. That in itself works fine and hence the training does too, but when using the `generate()` method, it rejects `interpolate_pos_encoding` during the validation of the model kwargs. For example using TrOCR, which was trained on an image size of 384x384, and changing the input size of the images to 128x768: ```py import torch from transformers import VisionEncoderDecoderModel model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten") images = torch.rand((1, 3, 128, 768)) # interpolate_pos_encoding=True should be passed to the forward method in order to use # different image sizes, but the validation of .generate() rejects it. model.generate(pixel_values=images, interpolate_pos_encoding=True) ``` Produces the error: ``` โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ Traceback (most recent call last) โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ /home/mjungo/ocr/summer-school/repro_interpolate_pos_encoding.py:10 in <module> โ”‚ โ”‚ โ”‚ โ”‚ 7 โ”‚ โ”‚ 8 # interpolate_pos_encoding=True should be passed to the forward method in order to use โ”‚ โ”‚ 9 # different image sizes, but the validation of .generate() rejects it. โ”‚ โ”‚ โฑ 10 model.generate(pixel_values=images, interpolate_pos_encoding=True) โ”‚ โ”‚ 11 โ”‚ โ”‚ โ”‚ โ”‚ /home/mjungo/miniconda3/lib/python3.9/site-packages/torch/utils/_contextlib.py:115 in โ”‚ โ”‚ decorate_context โ”‚ โ”‚ โ”‚ โ”‚ 112 โ”‚ @functools.wraps(func) โ”‚ โ”‚ 113 โ”‚ def decorate_context(*args, **kwargs): โ”‚ โ”‚ 114 โ”‚ โ”‚ with ctx_factory(): โ”‚ โ”‚ โฑ 115 โ”‚ โ”‚ โ”‚ return func(*args, **kwargs) โ”‚ โ”‚ 116 โ”‚ โ”‚ โ”‚ 117 โ”‚ return decorate_context โ”‚ โ”‚ 118 โ”‚ โ”‚ โ”‚ โ”‚ /home/mjungo/miniconda3/lib/python3.9/site-packages/transformers/generation/utils.py:1271 in โ”‚ โ”‚ generate โ”‚ โ”‚ โ”‚ โ”‚ 1268 โ”‚ โ”‚ generation_config = copy.deepcopy(generation_config) โ”‚ โ”‚ 1269 โ”‚ โ”‚ model_kwargs = generation_config.update(**kwargs) # All unused kwargs must be m โ”‚ โ”‚ 1270 โ”‚ โ”‚ generation_config.validate() โ”‚ โ”‚ โฑ 1271 โ”‚ โ”‚ self._validate_model_kwargs(model_kwargs.copy()) โ”‚ โ”‚ 1272 โ”‚ โ”‚ โ”‚ โ”‚ 1273 โ”‚ โ”‚ # 2. Set generation parameters if not already defined โ”‚ โ”‚ 1274 โ”‚ โ”‚ logits_processor = logits_processor if logits_processor is not None else LogitsP โ”‚ โ”‚ โ”‚ โ”‚ /home/mjungo/miniconda3/lib/python3.9/site-packages/transformers/generation/utils.py:1144 in โ”‚ โ”‚ _validate_model_kwargs โ”‚ โ”‚ โ”‚ โ”‚ 1141 โ”‚ โ”‚ โ”‚ โ”‚ unused_model_args.append(key) โ”‚ โ”‚ 1142 โ”‚ โ”‚ โ”‚ โ”‚ 1143 โ”‚ โ”‚ if unused_model_args: โ”‚ โ”‚ โฑ 1144 โ”‚ โ”‚ โ”‚ raise ValueError( โ”‚ โ”‚ 1145 โ”‚ โ”‚ โ”‚ โ”‚ f"The following `model_kwargs` are not used by the model: {unused_model_ โ”‚ โ”‚ 1146 โ”‚ โ”‚ โ”‚ โ”‚ " generate arguments will also show up in this list)" โ”‚ โ”‚ 1147 โ”‚ โ”‚ โ”‚ ) โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ ValueError: The following `model_kwargs` are not used by the model: ['interpolate_pos_encoding'] (note: typos in the generate arguments will also show up in this list) ``` ### Expected behavior It was expected to accept `interpolate_pos_encoding=True` and pass it to the forward method. Note: When removing the verification check, it works as expected, so this is purely an issue of the validation being incorrect, i.e. missing the additional arguments that are allowed for this kind of model.
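For readers following along, here is a minimal sketch of the whitelist-based relaxation discussed in the comments above. `ALLOWED_EXTRA_KWARGS` and `validate_model_kwargs` are hypothetical names for illustration, not the actual `transformers` internals (the real change went into `_validate_model_kwargs`):

```python
# Hypothetical sketch -- not the actual transformers implementation.
import inspect

ALLOWED_EXTRA_KWARGS = {"interpolate_pos_encoding"}  # extra but valid forward kwargs

def validate_model_kwargs(model, model_kwargs: dict) -> None:
    # Kwargs the model is known to accept during generation.
    accepted = set(inspect.signature(model.prepare_inputs_for_generation).parameters)
    unused = [
        key
        for key, value in model_kwargs.items()
        if value is not None and key not in accepted and key not in ALLOWED_EXTRA_KWARGS
    ]
    if unused:
        raise ValueError(f"The following `model_kwargs` are not used by the model: {unused}")
```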
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24919/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24919/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24918
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24918/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24918/comments
https://api.github.com/repos/huggingface/transformers/issues/24918/events
https://github.com/huggingface/transformers/issues/24918
1,811,832,040
I_kwDOCUB6oc5r_lzo
24,918
Inconsistent results between LlamaTokenizer and LlamaTokenizerFast
{ "login": "BugWriter2", "id": 26344199, "node_id": "MDQ6VXNlcjI2MzQ0MTk5", "avatar_url": "https://avatars.githubusercontent.com/u/26344199?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BugWriter2", "html_url": "https://github.com/BugWriter2", "followers_url": "https://api.github.com/users/BugWriter2/followers", "following_url": "https://api.github.com/users/BugWriter2/following{/other_user}", "gists_url": "https://api.github.com/users/BugWriter2/gists{/gist_id}", "starred_url": "https://api.github.com/users/BugWriter2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BugWriter2/subscriptions", "organizations_url": "https://api.github.com/users/BugWriter2/orgs", "repos_url": "https://api.github.com/users/BugWriter2/repos", "events_url": "https://api.github.com/users/BugWriter2/events{/privacy}", "received_events_url": "https://api.github.com/users/BugWriter2/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Sorry but I cannot reproduce your issue:\r\n\r\n```python \r\nIn [2]: slow_tokenizer.encode(input_text)\r\nOut[2]: [1, 8931]\r\n\r\nIn [3]: fast_tokenizer.encode(input_text)\r\nOut[3]: [1, 8931]\r\n``` \r\nMake sure you have the latest version of tokenizers maybe? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,693
1,693
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.15 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 ### Who can help? @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ``` from transformers import LlamaTokenizerFast, LlamaTokenizer fast_tokenizer = LlamaTokenizerFast.from_pretrained('openlm-research/open_llama_7b') slow_tokenizer = LlamaTokenizer.from_pretrained('openlm-research/open_llama_7b') input_text = 'tasks' slow_tokenizer.encode(input_text) # [1, 8931] fast_tokenizer.encode(input_text) # [1, 31822, 31824, 5577] ``` ### Expected behavior Both tokenizers should produce the same ids.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24918/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24918/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24917
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24917/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24917/comments
https://api.github.com/repos/huggingface/transformers/issues/24917/events
https://github.com/huggingface/transformers/pull/24917
1,811,800,495
PR_kwDOCUB6oc5V4mnn
24,917
๐ŸŒ [i18n-KO] Translated `perf_infer_gpu_one.md` to Korean
{ "login": "eenzeenee", "id": 71638597, "node_id": "MDQ6VXNlcjcxNjM4NTk3", "avatar_url": "https://avatars.githubusercontent.com/u/71638597?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eenzeenee", "html_url": "https://github.com/eenzeenee", "followers_url": "https://api.github.com/users/eenzeenee/followers", "following_url": "https://api.github.com/users/eenzeenee/following{/other_user}", "gists_url": "https://api.github.com/users/eenzeenee/gists{/gist_id}", "starred_url": "https://api.github.com/users/eenzeenee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eenzeenee/subscriptions", "organizations_url": "https://api.github.com/users/eenzeenee/orgs", "repos_url": "https://api.github.com/users/eenzeenee/repos", "events_url": "https://api.github.com/users/eenzeenee/events{/privacy}", "received_events_url": "https://api.github.com/users/eenzeenee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,689
1,689
1,689
CONTRIBUTOR
null
<!-- PR์˜ ์ œ๋ชฉ์€ "๐ŸŒ [i18n-KO] Translated `<your_file>.md` to Korean" ์œผ๋กœ ๋ถ€ํƒ๋“œ๋ฆฝ๋‹ˆ๋‹ค! --> # What does this PR do? Translated the `perf_infer_gpu_one.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [ ] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [ ] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [ ] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [ ] Check Inline TOC (e.g. `[[lowercased-header]]`) - [ ] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review? (Initial) <!-- 1. ์œ„ ์ฒดํฌ๊ฐ€ ๋ชจ๋‘ ์™„๋ฃŒ๋œ ๋’ค์—, ์ด ์•„๋ž˜์— ๋ฆฌ๋ทฐ๋ฅผ ์š”์ฒญํ•  ํŒ€์›๋“ค์„ ๋ฉ˜์…˜ํ•ด์ฃผ์„ธ์š”! --> May you please review this PR? @sronger @TaeYupNoh @HanNayeoniee @sim-so ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. ํŒ€์›๋“ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ€ ๋๋‚œ ํ›„์—๋งŒ ํ—ˆ๊น…ํŽ˜์ด์Šค ์ง์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> <!-- May you please review this PR? @sgugger, @ArthurZucker, @eunseojo -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24917/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24917/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24917", "html_url": "https://github.com/huggingface/transformers/pull/24917", "diff_url": "https://github.com/huggingface/transformers/pull/24917.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24917.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24916
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24916/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24916/comments
https://api.github.com/repos/huggingface/transformers/issues/24916/events
https://github.com/huggingface/transformers/pull/24916
1,811,733,921
PR_kwDOCUB6oc5V4X6U
24,916
Fix `main_input_name` in `src/transformers/keras_callbacks.py`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[ { "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false } ]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@Rocketknight1 Just for you to have a final comment when you are back ๐Ÿ™ ", "Also LGTM, thanks for catching the bug!" ]
1,689
1,690
1,689
COLLABORATOR
null
# What does this PR do? Fix #24872
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24916/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24916/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24916", "html_url": "https://github.com/huggingface/transformers/pull/24916", "diff_url": "https://github.com/huggingface/transformers/pull/24916.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24916.patch", "merged_at": 1689858098000 }
https://api.github.com/repos/huggingface/transformers/issues/24915
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24915/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24915/comments
https://api.github.com/repos/huggingface/transformers/issues/24915/events
https://github.com/huggingface/transformers/issues/24915
1,811,725,683
I_kwDOCUB6oc5r_L1z
24,915
ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" }, { "id": 5769473378, "node_id": "LA_kwDOCUB6oc8AAAABV-MtYg", "url": "https://api.github.com/repos/huggingface/transformers/labels/Vision", "name": "Vision", "color": "C079EF", "default": false, "description": "" } ]
open
false
null
[]
[ "Glad you get something different to work on ๐Ÿš€ ๐Ÿ‘€ ๐ŸŽ‰ ", "Hi, @amyeroberts, I don't know if you are working on this but if not I would be more than happy to take it up.", "Oh, this is the issue page, not the PR page!", "@shauray8 You're very welcome to take this up! :) \r\n\r\nThis model presents a new task for the library, so there might be some iterations and discussions on what the inputs and outputs should look like. The model translation should be fairly straightforward though, so I'd suggest starting with a PR that implements that and then on the PR we can figure out what works best." ]
1,689
1,702
null
COLLABORATOR
null
### Model description ViTPose is used in 2D human pose estimation, a subset of the keypoint detection task #24044 It provides a simple baseline for vision transformer-based human pose estimation. It utilises a pretrained vision transformer backbone to extract features and a simple decoder head to process the extracted features. Despite no elaborate designs in the model, ViTPose obtained state-of-the-art (SOTA) performance of 80.9 AP on the MS COCO Keypoint test-dev set. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Code and weights: https://github.com/ViTAE-Transformer/ViTPose Paper: https://arxiv.org/abs/2204.12484 @Annbless
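To make the "plain backbone + simple decoder" idea concrete, here is a minimal PyTorch sketch. It is illustrative only, not the official ViTPose code: the hidden size, the two-deconvolution head, and the 17-keypoint (COCO) default are assumptions.

```python
import torch
import torch.nn as nn

class SimpleKeypointHead(nn.Module):
    """Illustrative decoder head: upsample ViT patch features into keypoint heatmaps."""

    def __init__(self, hidden_dim: int = 768, num_keypoints: int = 17):
        super().__init__()
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(hidden_dim, 256, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(256, 256, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
        )
        self.final = nn.Conv2d(256, num_keypoints, kernel_size=1)

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, hidden_dim, H/16, W/16), reshaped from the ViT output
        return self.final(self.deconv(patch_features))

head = SimpleKeypointHead()
heatmaps = head(torch.randn(1, 768, 16, 16))  # -> (1, 17, 64, 64)
print(heatmaps.shape)
```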
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24915/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24915/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/24914
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24914/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24914/comments
https://api.github.com/repos/huggingface/transformers/issues/24914/events
https://github.com/huggingface/transformers/pull/24914
1,811,661,461
PR_kwDOCUB6oc5V4H4J
24,914
Fix `test_model_parallelism` for `FalconModel`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? Just a device issue (when running in a multi-GPU environment).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24914/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24914/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24914", "html_url": "https://github.com/huggingface/transformers/pull/24914", "diff_url": "https://github.com/huggingface/transformers/pull/24914.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24914.patch", "merged_at": 1689765496000 }
https://api.github.com/repos/huggingface/transformers/issues/24913
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24913/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24913/comments
https://api.github.com/repos/huggingface/transformers/issues/24913/events
https://github.com/huggingface/transformers/pull/24913
1,811,633,793
PR_kwDOCUB6oc5V4Bru
24,913
Remove unsupported and confusing "OpenLlama" architecture
{ "login": "tomaarsen", "id": 37621491, "node_id": "MDQ6VXNlcjM3NjIxNDkx", "avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomaarsen", "html_url": "https://github.com/tomaarsen", "followers_url": "https://api.github.com/users/tomaarsen/followers", "following_url": "https://api.github.com/users/tomaarsen/following{/other_user}", "gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions", "organizations_url": "https://api.github.com/users/tomaarsen/orgs", "repos_url": "https://api.github.com/users/tomaarsen/repos", "events_url": "https://api.github.com/users/tomaarsen/events{/privacy}", "received_events_url": "https://api.github.com/users/tomaarsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24913). All of your documentation changes will be reflected on that endpoint.", "No we cannot removing the architecture entirely that would be a breaking change in the library that would be unprecedented and against our philosphy/commitment to backward compatibility.\r\n\r\nEven removing it entirely from the docs seem a bit extreme. If users are confused by which class to use to load a model, they should use the Auto model API to let it select the right class for them. We can add a disclaimer in the doc page of open-llama doc or move it to the deprecated models but that's pretty much the extent of what we can do.\r\n\r\ncc @LysandreJik ", "I had such a fear - I understand your reasoning completely. Whichever way we go, we end up with a suboptimal situation. I'll let the user on Discord know that it won't be changed. \r\n\r\nI could add a disclaimer that the OpenLlama models do not use the OpenLlama architecture, but simply the Llama one.", "> I could add a disclaimer that the OpenLlama models do not use the OpenLlama architecture, but simply the Llama one.\r\n\r\nYes, that would be great!" ]
1,689
1,689
1,689
MEMBER
null
This reverts commit c2c99dc7ef5edab8f7674a1eb00cf6ac6996fd0f from #22795. Hello! # What does this PR do? Removes [Open-Llama](https://huggingface.co/docs/transformers/model_doc/open-llama), a confusingly named and unused model architecture whose documentation links all throw 404 errors. ## Motivation * The [Open-Llama-V1 model](https://huggingface.co/s-JoL/Open-Llama-V1) and [Open-Llama GitHub repository](https://github.com/s-JoL/Open-Llama) have both been removed. * There is only [one issue](https://github.com/huggingface/transformers/issues?q=OpenLlamaForCausalLM) on the transformers repo that refers to `OpenLlamaForCausalLM`. * There is only [one model](https://huggingface.co/search/full-text?q=OpenLlamaForCausalLM) on the Hub that mentions `OpenLlamaForCausalLM`. * All of the ~1900 other Llama models use the standard `LlamaForCausalLM` instead. * Users on the HF Discord have been confused by the [Open-Llama](https://huggingface.co/docs/transformers/model_doc/open-llama) documentation page. If you are opposed on principle, i.e. no architectures should ever be removed, then we may want to at least update the documentation. However, I urge you to remove this confusing architecture. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Reviewers of the original PR: @ArthurZucker @sgugger cc: @s-JoL - Tom Aarsen
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24913/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24913/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24913", "html_url": "https://github.com/huggingface/transformers/pull/24913", "diff_url": "https://github.com/huggingface/transformers/pull/24913.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24913.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24912
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24912/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24912/comments
https://api.github.com/repos/huggingface/transformers/issues/24912/events
https://github.com/huggingface/transformers/issues/24912
1,811,536,692
I_kwDOCUB6oc5r-ds0
24,912
blip2 always decodes eos_token first
{ "login": "TobiasLee", "id": 20009381, "node_id": "MDQ6VXNlcjIwMDA5Mzgx", "avatar_url": "https://avatars.githubusercontent.com/u/20009381?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TobiasLee", "html_url": "https://github.com/TobiasLee", "followers_url": "https://api.github.com/users/TobiasLee/followers", "following_url": "https://api.github.com/users/TobiasLee/following{/other_user}", "gists_url": "https://api.github.com/users/TobiasLee/gists{/gist_id}", "starred_url": "https://api.github.com/users/TobiasLee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TobiasLee/subscriptions", "organizations_url": "https://api.github.com/users/TobiasLee/orgs", "repos_url": "https://api.github.com/users/TobiasLee/repos", "events_url": "https://api.github.com/users/TobiasLee/events{/privacy}", "received_events_url": "https://api.github.com/users/TobiasLee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "Hi @TobiasLee \r\nThanks for the issue. I think this is expected, the way Blip2 has been trained always prepends the BOS (Beginning of Sentence) token to the input text due to the underlying tokenizer they use. If conditional text is added to the input, the tokenizer should normally take care of adding that token to the beginning of the sentence. \r\n\r\nSee this line for reference: https://github.com/huggingface/transformers/blob/e75cb0cb3c5fef887abea6f099252e59a659af9d/src/transformers/models/blip_2/modeling_blip_2.py#L1837 / that manually adds the BOS token if no text is passed\r\n\r\nAnd this line from the original code: https://github.com/salesforce/LAVIS/blob/main/lavis/models/blip2_models/blip2_opt.py#L219 that uses the opt token which adds the bos token to any input by default according to the tokenizer_config file: https://huggingface.co/facebook/opt-350m/raw/main/tokenizer_config.json ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,693
1,693
CONTRIBUTOR
null
### System Info - `transformers` version: 4.31.0 - Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I just run the code provided by the example given in the doc: https://huggingface.co/docs/transformers/main/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example ### Expected behavior I got exactly the same caption for the image: generated_text 'two cats laying on a couch'. But it's weird that the generated ids look like: generated_ids: tensor([[ 2, 7109, 10017, 11963, 15, 10, 16433, 50118]]) where the leading `2` means a `</s>` is generated. Is this expected, or is something wrong with the decoding? @amyeroberts @ArthurZucker do you guys have any ideas on it?
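As the reply above explains, the leading `2` is the BOS id the OPT tokenizer prepends; it can simply be skipped at decode time. A minimal sketch, assuming the `Salesforce/blip2-opt-2.7b` checkpoint and a PIL `image` you supply yourself; `skip_special_tokens=True` is the standard decoding argument:

```python
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

inputs = processor(images=image, return_tensors="pt")  # `image`: a PIL image you provide
generated_ids = model.generate(**inputs)
# skip_special_tokens=True drops ids such as the leading 2 (`</s>`, which the OPT
# tokenizer also uses as BOS), so the decoded caption starts with the actual text.
caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(caption)
```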
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24912/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24912/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24911
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24911/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24911/comments
https://api.github.com/repos/huggingface/transformers/issues/24911/events
https://github.com/huggingface/transformers/pull/24911
1,811,486,040
PR_kwDOCUB6oc5V3hBZ
24,911
๐ŸŒ [i18n-KO] Translated `perf_train_cpu.md` to Korean
{ "login": "seank021", "id": 127049663, "node_id": "U_kgDOB5Kfvw", "avatar_url": "https://avatars.githubusercontent.com/u/127049663?v=4", "gravatar_id": "", "url": "https://api.github.com/users/seank021", "html_url": "https://github.com/seank021", "followers_url": "https://api.github.com/users/seank021/followers", "following_url": "https://api.github.com/users/seank021/following{/other_user}", "gists_url": "https://api.github.com/users/seank021/gists{/gist_id}", "starred_url": "https://api.github.com/users/seank021/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/seank021/subscriptions", "organizations_url": "https://api.github.com/users/seank021/orgs", "repos_url": "https://api.github.com/users/seank021/repos", "events_url": "https://api.github.com/users/seank021/events{/privacy}", "received_events_url": "https://api.github.com/users/seank021/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "๋ฆฌ๋ทฐ ์‚ฌํ•ญ ์—†์Šต๋‹ˆ๋‹ค!", "_The documentation is not available anymore as the PR was closed or merged._", "์ˆ˜๊ณ  ๋งŽ์œผ์…จ์Šต๋‹ˆ๋‹ค!!", "์ €๋„ @0525hhgus ๋‹˜์˜ ๋Œ“๊ธ€ ๋ถ€๋ถ„ ์™ธ์—๋Š” ๋ณ„๋„์˜ ๋ฆฌ๋ทฐ ์‚ฌํ•ญ ์—†์Šต๋‹ˆ๋‹ค :)" ]
1,689
1,690
1,690
CONTRIBUTOR
null
<!-- PR์˜ ์ œ๋ชฉ์€ "๐ŸŒ [i18n-KO] Translated `<your_file>.md` to Korean" ์œผ๋กœ ๋ถ€ํƒ๋“œ๋ฆฝ๋‹ˆ๋‹ค --> # What does this PR do? Translated the `perf_train_cpu.md` file of the documentation to Korean ๐Ÿ˜„ Thank you in advance for your review! Part of https://github.com/huggingface/transformers/issues/20179 <!-- ๋ฉ”์ธ ์ด์Šˆ์— ๊ธฐ๋ก์ด ๋‚จ์•„์š”! ๊ฐ€์งœ์—ฐ๊ตฌ์†Œ ๋ฆฌํฌ๋ฅผ ์‚ฌ์šฉํ•ด ์—ฐ์Šตํ•˜์‹ค๋•Œ๋Š” ์ œ๊ฑฐํ•ด์ฃผ์‹œ๋ฉด ๊ฐ์‚ฌํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค! :smile: --> ## Before reviewing - [x] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [x] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [x] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review? (Initial) <!-- 1. ์œ„ ์ฒดํฌ๊ฐ€ ๋ชจ๋‘ ์™„๋ฃŒ๋œ ๋’ค์—๋งŒ ๊ฐ€์งœ์—ฐ๊ตฌ์†Œ ํŒ€์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> <!-- Team PseudoLab, may you please review this PR? --> @kihoon71, @0525hhgus, @54data, @Sunmin0520, @seank021, @augustinLib ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. ๊ฐ€์งœ์—ฐ๊ตฌ์†Œ ํŒ€์›๋“ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ€ ๋๋‚œ ํ›„์—๋งŒ ํ—ˆ๊น…ํŽ˜์ด์Šค ์ง์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24911/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24911/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24911", "html_url": "https://github.com/huggingface/transformers/pull/24911", "diff_url": "https://github.com/huggingface/transformers/pull/24911.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24911.patch", "merged_at": 1690214054000 }
https://api.github.com/repos/huggingface/transformers/issues/24910
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24910/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24910/comments
https://api.github.com/repos/huggingface/transformers/issues/24910/events
https://github.com/huggingface/transformers/issues/24910
1,811,410,255
I_kwDOCUB6oc5r9-1P
24,910
labels should not be the same as input_ids for causal language model
{ "login": "thomas010", "id": 7474360, "node_id": "MDQ6VXNlcjc0NzQzNjA=", "avatar_url": "https://avatars.githubusercontent.com/u/7474360?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomas010", "html_url": "https://github.com/thomas010", "followers_url": "https://api.github.com/users/thomas010/followers", "following_url": "https://api.github.com/users/thomas010/following{/other_user}", "gists_url": "https://api.github.com/users/thomas010/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomas010/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomas010/subscriptions", "organizations_url": "https://api.github.com/users/thomas010/orgs", "repos_url": "https://api.github.com/users/thomas010/repos", "events_url": "https://api.github.com/users/thomas010/events{/privacy}", "received_events_url": "https://api.github.com/users/thomas010/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@thomas010 did you mean to provide the same code for `result['input_ids']` and `result['labels']`?\r\n\r\nI don't know if this helps you, but in standard causal language modeling, the input sequence and labels _should_ be the same. Suppose the sequence is <token_A>, <token_B>, <token_C>. In step 0, we want to predict <token_A>. In Step 1, we want to predict <token_B> from <token_A>. In step 2, we want to predict <token_C> from <token_A>, <token_B>. `transformers` knows this is what we want when we provide the same sequence as `input_ids` and `labels`.", "i got it, thank you" ]
1,689
1,689
1,689
NONE
null
### System Info in the example for causal language model pretraining (examples/pytorch/language-modeling/run_clm.py: line 490), labels in 'result' should not be the same as input_ids, but maybe as follows ```` input_ids = result["input_ids"] result["input_ids"] = input_ids[:, :-1].contiguous() result["labels"]=input_ids[:, :-1].contiguous() ```` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction examples/pytorch/language-modeling/run_clm.py: line 490 ### Expected behavior update
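For reference, the first reply above describes what actually happens inside the library: `labels` are passed identical to `input_ids`, and the model shifts them internally before computing the loss. A minimal sketch of that shift (the helper name is ours; this mirrors what `transformers` causal-LM heads do in `forward` when `labels` are given):

```python
import torch
import torch.nn.functional as F

def causal_lm_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Predict token t+1 from positions <= t: drop the last logit and the first label.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)
    )

# Toy usage: labels == input_ids, exactly as run_clm.py prepares them.
logits = torch.randn(2, 8, 100)         # (batch, seq_len, vocab)
labels = torch.randint(0, 100, (2, 8))  # same ids as the inputs
print(causal_lm_loss(logits, labels))
```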
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24910/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24910/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24909
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24909/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24909/comments
https://api.github.com/repos/huggingface/transformers/issues/24909/events
https://github.com/huggingface/transformers/pull/24909
1,811,407,194
PR_kwDOCUB6oc5V3Pz1
24,909
Fix minor llama2.md model doc typos
{ "login": "tmc", "id": 3977, "node_id": "MDQ6VXNlcjM5Nzc=", "avatar_url": "https://avatars.githubusercontent.com/u/3977?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tmc", "html_url": "https://github.com/tmc", "followers_url": "https://api.github.com/users/tmc/followers", "following_url": "https://api.github.com/users/tmc/following{/other_user}", "gists_url": "https://api.github.com/users/tmc/gists{/gist_id}", "starred_url": "https://api.github.com/users/tmc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tmc/subscriptions", "organizations_url": "https://api.github.com/users/tmc/orgs", "repos_url": "https://api.github.com/users/tmc/repos", "events_url": "https://api.github.com/users/tmc/events{/privacy}", "received_events_url": "https://api.github.com/users/tmc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
Fixes typos in the llama2 model doc
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24909/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24909/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24909", "html_url": "https://github.com/huggingface/transformers/pull/24909", "diff_url": "https://github.com/huggingface/transformers/pull/24909.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24909.patch", "merged_at": 1689768794000 }
https://api.github.com/repos/huggingface/transformers/issues/24908
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24908/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24908/comments
https://api.github.com/repos/huggingface/transformers/issues/24908/events
https://github.com/huggingface/transformers/pull/24908
1,811,381,247
PR_kwDOCUB6oc5V3KOM
24,908
[`Llama2`] Add support for Llama 2: Fix convert_llama_weights_to_hf.โ€ฆ
{ "login": "linggong2023", "id": 130736106, "node_id": "U_kgDOB8rf6g", "avatar_url": "https://avatars.githubusercontent.com/u/130736106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/linggong2023", "html_url": "https://github.com/linggong2023", "followers_url": "https://api.github.com/users/linggong2023/followers", "following_url": "https://api.github.com/users/linggong2023/following{/other_user}", "gists_url": "https://api.github.com/users/linggong2023/gists{/gist_id}", "starred_url": "https://api.github.com/users/linggong2023/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/linggong2023/subscriptions", "organizations_url": "https://api.github.com/users/linggong2023/orgs", "repos_url": "https://api.github.com/users/linggong2023/repos", "events_url": "https://api.github.com/users/linggong2023/events{/privacy}", "received_events_url": "https://api.github.com/users/linggong2023/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "What does `f` mean in model size? Does it mean `fine-tuned`?", "I understand that \"f\" should be modified to \"-CHAT\" to support the CHAT\r\nmodel. I have made further modifications to the relevant code, replacing\r\nall instances of \"f\" with \"-CHAT\". Could you please reload the same commit\r\nfor me?\r\n\r\nOn Wed, Jul 19, 2023 at 4:06โ€ฏPM Ahmad Fahadh Ilyas ***@***.***>\r\nwrote:\r\n\r\n> What does f mean in model size? Does it mean fine-tuned?\r\n>\r\n> โ€”\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/pull/24908#issuecomment-1641622215>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/A7FN72XG322S2MFMCMRBO23XQ6ISFANCNFSM6AAAAAA2PQNSRQ>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n", "cc @ArthurZucker ", "I also think the same way. At first, I couldn't understand 7Bf, but now it seems that these things are useless. I just deleted 7Bf, 13Bf, and 70Bf.", "@sgugger I have finished the modifications, ready for merging.", "No no, why would you remove the `70Bf`? All the `f` are for the finetuned/chat checkpoints" ]
1,689
1,689
1,689
NONE
null
โ€ฆpy support 7Bf # What does this PR do? Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
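For context, a sketch of the kind of change under discussion: extending the accepted `--model_size` choices in `convert_llama_weights_to_hf.py` so the fine-tuned/chat (`f`) Llama 2 checkpoints convert as well. The exact choice list below is an assumption for illustration; consult the script on `main` for the authoritative set:

```python
import argparse

parser = argparse.ArgumentParser()
# Illustrative only -- the real script defines more arguments (--input_dir, --output_dir).
parser.add_argument(
    "--model_size",
    choices=["7B", "7Bf", "13B", "13Bf", "30B", "65B", "70B", "70Bf", "tokenizer_only"],
    help="The 'f' suffix denotes the fine-tuned (chat) variants of Llama 2.",
)
args = parser.parse_args(["--model_size", "7Bf"])
print(args.model_size)  # 7Bf
```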
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24908/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24908/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24908", "html_url": "https://github.com/huggingface/transformers/pull/24908", "diff_url": "https://github.com/huggingface/transformers/pull/24908.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24908.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24907
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24907/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24907/comments
https://api.github.com/repos/huggingface/transformers/issues/24907/events
https://github.com/huggingface/transformers/pull/24907
1,811,284,856
PR_kwDOCUB6oc5V21fw
24,907
Fixed issue where ACCELERATE_USE_CPU="False" results in bool(True)
{ "login": "madhavajay", "id": 2882739, "node_id": "MDQ6VXNlcjI4ODI3Mzk=", "avatar_url": "https://avatars.githubusercontent.com/u/2882739?v=4", "gravatar_id": "", "url": "https://api.github.com/users/madhavajay", "html_url": "https://github.com/madhavajay", "followers_url": "https://api.github.com/users/madhavajay/followers", "following_url": "https://api.github.com/users/madhavajay/following{/other_user}", "gists_url": "https://api.github.com/users/madhavajay/gists{/gist_id}", "starred_url": "https://api.github.com/users/madhavajay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/madhavajay/subscriptions", "organizations_url": "https://api.github.com/users/madhavajay/orgs", "repos_url": "https://api.github.com/users/madhavajay/repos", "events_url": "https://api.github.com/users/madhavajay/events{/privacy}", "received_events_url": "https://api.github.com/users/madhavajay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24907). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,689
1,689
CONTRIBUTOR
null
- This results in cpu mode on Apple Silicon mps # What does this PR do? Fixes a bug which prevents device("mps") working if `ACCELERATE_USE_CPU` is set to anything including "False". <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
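The root cause is a general Python pitfall: `bool()` on any non-empty string, including `"False"`, is `True`, so environment flags have to be compared as strings. A minimal sketch of the failure and a safer parse (the helper name is hypothetical):

```python
import os

# Any non-empty string is truthy, so bool() cannot parse "False".
assert bool("False") is True

def env_flag(name: str, default: bool = False) -> bool:
    """Hypothetical helper: parse an environment variable as a boolean."""
    value = os.environ.get(name)
    if value is None:
        return default
    # Compare the string itself instead of relying on truthiness.
    return value.strip().lower() in ("1", "true", "yes")

os.environ["ACCELERATE_USE_CPU"] = "False"
assert env_flag("ACCELERATE_USE_CPU") is False
```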
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24907/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24907/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24907", "html_url": "https://github.com/huggingface/transformers/pull/24907", "diff_url": "https://github.com/huggingface/transformers/pull/24907.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24907.patch", "merged_at": 1689766202000 }
https://api.github.com/repos/huggingface/transformers/issues/24906
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24906/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24906/comments
https://api.github.com/repos/huggingface/transformers/issues/24906/events
https://github.com/huggingface/transformers/pull/24906
1,811,279,959
PR_kwDOCUB6oc5V20bU
24,906
[`Llama2`] replace `self.pretraining_tp` with `self.config.pretraining_tp`
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I'm curious, if we just set `pretraining_tp` to 1 in `config.json`, does it mean we disable it? Because when I see the model, there is some `if-else condition` checking the value of `config.pretraining_tp > 1`.", "@fahadh4ilyas the variable is stored as a private attribute so even if you modify the config it won't work sadly, hence the PR\r\n\r\nEDT: yes if you manually modify the config file locally on the Hub or locally it will work yes, but this might break existing inference setups (if we modify the official repo)", "> @fahadh4ilyas the variable is stored as a private attribute so even if you modify the config it won't work sadly, hence the PR\r\n> \r\n> EDT: yes if you manually modify the config file locally on the Hub or locally it will work yes, but this might break existing inference setups (if we modify the official repo)\r\n\r\nWell at least if you want to fine-tune model based on llama 2, you could just change the config and use that config value right?", "yes this is definitely possible and correct, but in terms of UX, I personally would prefer to call a single line `model.disable_tp()` rather than manually going over the config, change it and use that version instead. For end-users and people that are not familiar with TP that want to run training of Llama2 out of the box (for example using PEFT library as linked in the PEFT PR) might get very confused by looking at the error, and this PR combined with the PEFT PR solves the issue. Let's see what arthur and sylvain will say, happy to close the PR if they think it is not relevant !", "> yes this is definitely possible and correct, but in terms of UX, I personally would prefer to call a single line `model.disable_tp()` rather than manually going over the config, change it and use that version instead. For end-users and people that are not familiar with TP that want to run training of Llama2 out of the box (for example using PEFT library as linked in the PEFT PR) might get very confused by looking at the error, and this PR combined with the PEFT PR solves the issue. Let's see what arthur and sylvain will say, happy to close the PR if they think it is not relevant !\r\n\r\nThat's fair enough. I actually also didn't understand at first what is `pretraining_tp` until I saw this pull request. Changing config manually might be possible for people who understand the architecture. But, for people who just want to use the model wont bother changing it.\r\n\r\nI'm wondering does having `pretaining_tp` make any difference? If the use case is for parallelism, what I saw in the script is that the script only for-looping each part of weight. I don't see any parallel method there." ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? Llama-2 (and also, in the past, Bloom) introduced a new attribute in the config file, `pretraining_tp`, to mimic the behaviour of the original model at inference. Inside some layers, the TP paradigm is therefore "reproduced" by manually simulating it; see for example: https://github.com/huggingface/transformers/blob/476be08c4aa96f8c1cae4200d2677bbe8f12cf80/src/transformers/models/llama/modeling_llama.py#L291 ![Screenshot from 2022-04-22 15-55-48](https://user-images.githubusercontent.com/49240599/164728838-3f78585b-1018-4366-a499-95133fdeaa89.png) In fact this can lead to unexpected behaviour for users, especially with the peft library (related: https://github.com/huggingface/peft/issues/726); currently it is not possible to finetune llama-2 models with `pretraining_tp > 1`: ```bash File "/home/younes_huggingface_co/code/transformers/src/transformers/models/llama/modeling_llama.py", line 209, in forward gate_proj = torch.cat([F.linear(x, gate_proj_slices[i]) for i in range(self.pretraining_tp)], dim=-1) File "/home/younes_huggingface_co/code/transformers/src/transformers/models/llama/modeling_llama.py", line 209, in <listcomp> gate_proj = torch.cat([F.linear(x, gate_proj_slices[i]) for i in range(self.pretraining_tp)], dim=-1) RuntimeError: mat1 and mat2 shapes cannot be multiplied (2048x5120 and 1x6912) ``` I would argue that these slight numerical differences are acceptable in the context of training, so we should give users the possibility to disable this behaviour at their own risk. This PR fixes this by proposing a new method, `disable_pretraining_tp`, to disable that behaviour. I also added a method to restore the TP behaviour in case users want to revert it after training. On par with: https://github.com/huggingface/peft/pull/728 cc @ArthurZucker @sgugger @pacman100 @BenjaminBossan
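As a user-side workaround (independent of the `disable_pretraining_tp` method proposed in this PR), the config attribute can be overridden at load time, since `from_pretrained` forwards unrecognized keyword arguments to the config. A sketch, assuming a Llama-2 checkpoint name:

```python
from transformers import AutoModelForCausalLM

# Overriding the config value at load time disables the simulated
# tensor-parallel slicing, at the cost of tiny numerical differences
# versus the original pretraining computation.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumed checkpoint name
    pretraining_tp=1,
)
assert model.config.pretraining_tp == 1
```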
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24906/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/24906/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24906", "html_url": "https://github.com/huggingface/transformers/pull/24906", "diff_url": "https://github.com/huggingface/transformers/pull/24906.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24906.patch", "merged_at": 1689769587000 }
https://api.github.com/repos/huggingface/transformers/issues/24905
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24905/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24905/comments
https://api.github.com/repos/huggingface/transformers/issues/24905/events
https://github.com/huggingface/transformers/issues/24905
1,811,256,327
I_kwDOCUB6oc5r9ZQH
24,905
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
{ "login": "HyDRA08", "id": 40757292, "node_id": "MDQ6VXNlcjQwNzU3Mjky", "avatar_url": "https://avatars.githubusercontent.com/u/40757292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HyDRA08", "html_url": "https://github.com/HyDRA08", "followers_url": "https://api.github.com/users/HyDRA08/followers", "following_url": "https://api.github.com/users/HyDRA08/following{/other_user}", "gists_url": "https://api.github.com/users/HyDRA08/gists{/gist_id}", "starred_url": "https://api.github.com/users/HyDRA08/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HyDRA08/subscriptions", "organizations_url": "https://api.github.com/users/HyDRA08/orgs", "repos_url": "https://api.github.com/users/HyDRA08/repos", "events_url": "https://api.github.com/users/HyDRA08/events{/privacy}", "received_events_url": "https://api.github.com/users/HyDRA08/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This usually means there is something wrong with your setup install.", "> \r\n\r\nLike version mismatch?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,693
1,693
NONE
null
### System Info torch version: 1.12.0+cu113 CUDA: 11.4 `RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling cublasCreate(handle)` I am getting this error when trying to run using CUDA. It works fine when running on CPU. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction CODE: ``` import os import pandas as pd import time from transformers import T5Tokenizer, T5ForConditionalGeneration import torch tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl") model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", low_cpu_mem_usage=True).to("cuda:0") def generate(input_text): input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda:0") output = model.generate(input_ids, max_length=512) return tokenizer.decode(output[0], skip_special_tokens=True) input_text = 'Something .... Sonethinggg......' response = generate(input_text) print(response) ``` ### Expected behavior Output
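`CUBLAS_STATUS_NOT_INITIALIZED` usually points at the CUDA setup (driver/toolkit mismatch or the card running out of memory) rather than at transformers itself. A quick, model-independent sanity check:

```python
import torch

# If this bare matmul also fails, the problem is the CUDA install,
# not the model code. Check that torch.version.cuda matches the driver.
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
if torch.cuda.is_available():
    a = torch.randn(64, 64, device="cuda:0")
    b = torch.randn(64, 64, device="cuda:0")
    print((a @ b).sum().item())      # exercises cuBLAS directly
    print(torch.cuda.memory_allocated(0), "bytes allocated")
```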
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24905/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24905/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24904
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24904/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24904/comments
https://api.github.com/repos/huggingface/transformers/issues/24904/events
https://github.com/huggingface/transformers/pull/24904
1,811,241,975
PR_kwDOCUB6oc5V2sGn
24,904
๐ŸŒ [i18n-KO] Translated `tf_xla.md` to Korean
{ "login": "54data", "id": 99173116, "node_id": "U_kgDOBelC_A", "avatar_url": "https://avatars.githubusercontent.com/u/99173116?v=4", "gravatar_id": "", "url": "https://api.github.com/users/54data", "html_url": "https://github.com/54data", "followers_url": "https://api.github.com/users/54data/followers", "following_url": "https://api.github.com/users/54data/following{/other_user}", "gists_url": "https://api.github.com/users/54data/gists{/gist_id}", "starred_url": "https://api.github.com/users/54data/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/54data/subscriptions", "organizations_url": "https://api.github.com/users/54data/orgs", "repos_url": "https://api.github.com/users/54data/repos", "events_url": "https://api.github.com/users/54data/events{/privacy}", "received_events_url": "https://api.github.com/users/54data/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "๋ฆฌ๋ทฐ ์‚ฌํ•ญ ์—†์Šต๋‹ˆ๋‹ค!", "๊ผผ๊ผผํ•œ ๋ฒˆ์—ญ ์ž˜ ๋ณด์•˜์Šต๋‹ˆ๋‹ค! ์ˆ˜๊ณ  ๋งŽ์œผ์…จ์Šต๋‹ˆ๋‹ค!" ]
1,689
1,690
1,690
CONTRIBUTOR
null
<!-- PR์˜ ์ œ๋ชฉ์€ "๐ŸŒ [i18n-KO] Translated `<your_file>.md` to Korean" ์œผ๋กœ ๋ถ€ํƒ๋“œ๋ฆฝ๋‹ˆ๋‹ค! --> # What does this PR do? Translated the `tf_xla.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [x] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [x] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review? (Initial) <!-- 1. ์œ„ ์ฒดํฌ๊ฐ€ ๋ชจ๋‘ ์™„๋ฃŒ๋œ ๋’ค์—, ์ด ์•„๋ž˜์— ๋ฆฌ๋ทฐ๋ฅผ ์š”์ฒญํ•  ํŒ€์›๋“ค์„ ๋ฉ˜์…˜ํ•ด์ฃผ์„ธ์š”! --> <!-- May you please review this PR? @member1 @member2 ... --> @kihoon71, @0525hhgus, @54data, @Sunmin0520, @seank021, @augustinLib ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. ํŒ€์›๋“ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ€ ๋๋‚œ ํ›„์—๋งŒ ํ—ˆ๊น…ํŽ˜์ด์Šค ์ง์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24904/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24904/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24904", "html_url": "https://github.com/huggingface/transformers/pull/24904", "diff_url": "https://github.com/huggingface/transformers/pull/24904.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24904.patch", "merged_at": 1690285402000 }
https://api.github.com/repos/huggingface/transformers/issues/24903
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24903/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24903/comments
https://api.github.com/repos/huggingface/transformers/issues/24903/events
https://github.com/huggingface/transformers/issues/24903
1,811,232,211
I_kwDOCUB6oc5r9TXT
24,903
Xformers is not installed correctly.
{ "login": "david-waterworth", "id": 5028974, "node_id": "MDQ6VXNlcjUwMjg5NzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4", "gravatar_id": "", "url": "https://api.github.com/users/david-waterworth", "html_url": "https://github.com/david-waterworth", "followers_url": "https://api.github.com/users/david-waterworth/followers", "following_url": "https://api.github.com/users/david-waterworth/following{/other_user}", "gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}", "starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions", "organizations_url": "https://api.github.com/users/david-waterworth/orgs", "repos_url": "https://api.github.com/users/david-waterworth/repos", "events_url": "https://api.github.com/users/david-waterworth/events{/privacy}", "received_events_url": "https://api.github.com/users/david-waterworth/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It looks like the `pipeline` is back to importing every model (this message comes from trying to access an unrelated model). I'll have a look later this week. You can ignore that warning in the meantime, it's irrelevant.", "Should be fixed by the PR linked above.", "same issue for me on this basic example:\r\n\r\n```\r\nimport argparse\r\nfrom transformers import pipeline\r\n\r\n# Create the parser\r\nparser = argparse.ArgumentParser(description=\"Perform sentiment analysis\")\r\n\r\n# Add an argument\r\nparser.add_argument('Text', type=str, help=\"the text to analyze\")\r\n\r\n# Parse the argument\r\nargs = parser.parse_args()\r\n\r\n# Load the classifier\r\nclassifier = pipeline(\"sentiment-analysis\", model=\"distilbert-base-uncased-finetuned-sst-2-english\")\r\n\r\n# Perform sentiment analysis\r\nres = classifier(args.Text)\r\n\r\n# Print the result\r\nprint(res)\r\n```\r\n\r\nReinstalled transformers, using v.4.31.0", "The fix is not in v4.31.0, you will need to use a source install." ]
1,689
1,692
1,689
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import pipeline pipe = pipeline("text-classification", model="roberta-base", device=0) ``` Edit: I know this model isn't trained for the "text-classification" task; I get the same problem with a private model I fine-tuned. Results in the message ``` ... Xformers is not installed correctly. If you want to use memory_efficient_attention to accelerate training use the following command to install Xformers pip install xformers. ``` But I'm using torch==2.0.1 and [memory-efficient-attention](https://huggingface.co/docs/diffusers/optimization/fp16#memory-efficient-attention) states "If you have PyTorch 2.0 installed, you shouldn't use xFormers!" The message is confusing - I have torch 2.0 installed and the pipeline is for inference. This message doesn't occur if I use `AutoModelForSequenceClassification.from_pretrained`. ### Expected behavior The documentation and the warning message are inconsistent.
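For context on why the warning is irrelevant here: PyTorch 2.0 ships memory-efficient attention natively as `scaled_dot_product_attention`, so xFormers adds nothing for inference. A minimal sketch:

```python
import torch
import torch.nn.functional as F

# PyTorch >= 2.0 provides fused/memory-efficient attention natively,
# so xFormers is not required.
q = torch.randn(1, 8, 16, 64)  # (batch, heads, seq_len, head_dim)
k = torch.randn(1, 8, 16, 64)
v = torch.randn(1, 8, 16, 64)
out = F.scaled_dot_product_attention(q, k, v)  # picks an efficient kernel
print(out.shape)  # torch.Size([1, 8, 16, 64])
```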
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24903/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24903/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24902
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24902/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24902/comments
https://api.github.com/repos/huggingface/transformers/issues/24902/events
https://github.com/huggingface/transformers/pull/24902
1,811,167,268
PR_kwDOCUB6oc5V2b-J
24,902
fix typo in BARK_PRETRAINED_MODEL_ARCHIVE_LIST
{ "login": "21jun", "id": 29483429, "node_id": "MDQ6VXNlcjI5NDgzNDI5", "avatar_url": "https://avatars.githubusercontent.com/u/29483429?v=4", "gravatar_id": "", "url": "https://api.github.com/users/21jun", "html_url": "https://github.com/21jun", "followers_url": "https://api.github.com/users/21jun/followers", "following_url": "https://api.github.com/users/21jun/following{/other_user}", "gists_url": "https://api.github.com/users/21jun/gists{/gist_id}", "starred_url": "https://api.github.com/users/21jun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/21jun/subscriptions", "organizations_url": "https://api.github.com/users/21jun/orgs", "repos_url": "https://api.github.com/users/21jun/repos", "events_url": "https://api.github.com/users/21jun/events{/privacy}", "received_events_url": "https://api.github.com/users/21jun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> fix typo in BARK_PRETRAINED_MODEL_ARCHIVE_LIST suno/barh should be suno/bark ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24902/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24902/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24902", "html_url": "https://github.com/huggingface/transformers/pull/24902", "diff_url": "https://github.com/huggingface/transformers/pull/24902.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24902.patch", "merged_at": 1689766504000 }
https://api.github.com/repos/huggingface/transformers/issues/24900
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24900/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24900/comments
https://api.github.com/repos/huggingface/transformers/issues/24900/events
https://github.com/huggingface/transformers/pull/24900
1,811,011,565
PR_kwDOCUB6oc5V16jY
24,900
๐ŸŒ [i18n-KO] Translated `testing.md` to Korean
{ "login": "Sunmin0520", "id": 60782131, "node_id": "MDQ6VXNlcjYwNzgyMTMx", "avatar_url": "https://avatars.githubusercontent.com/u/60782131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sunmin0520", "html_url": "https://github.com/Sunmin0520", "followers_url": "https://api.github.com/users/Sunmin0520/followers", "following_url": "https://api.github.com/users/Sunmin0520/following{/other_user}", "gists_url": "https://api.github.com/users/Sunmin0520/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sunmin0520/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sunmin0520/subscriptions", "organizations_url": "https://api.github.com/users/Sunmin0520/orgs", "repos_url": "https://api.github.com/users/Sunmin0520/repos", "events_url": "https://api.github.com/users/Sunmin0520/events{/privacy}", "received_events_url": "https://api.github.com/users/Sunmin0520/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "์œ„์— ํ˜„์„œ ๋ฉ˜ํ† ๋‹˜๊ป˜์„œ ์˜ฌ๋ ค์ฃผ์‹  ๊ฒƒ ์™ธ์—๋Š” ๋ฆฌ๋ทฐ ์‚ฌํ•ญ ์—†์Šต๋‹ˆ๋‹ค!" ]
1,689
1,690
1,690
CONTRIBUTOR
null
# What does this PR do? Translated the `testing.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [x] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [x] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review? (Initial) @kihoon71, @0525hhgus, @54data, @seank021, @augustinLib ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24900/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24900/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24900", "html_url": "https://github.com/huggingface/transformers/pull/24900", "diff_url": "https://github.com/huggingface/transformers/pull/24900.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24900.patch", "merged_at": 1690205052000 }
https://api.github.com/repos/huggingface/transformers/issues/24899
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24899/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24899/comments
https://api.github.com/repos/huggingface/transformers/issues/24899/events
https://github.com/huggingface/transformers/issues/24899
1,810,996,642
I_kwDOCUB6oc5r8Z2i
24,899
LLAMA 2 HF tokenizer len is 32001
{ "login": "ari9dam", "id": 14134882, "node_id": "MDQ6VXNlcjE0MTM0ODgy", "avatar_url": "https://avatars.githubusercontent.com/u/14134882?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ari9dam", "html_url": "https://github.com/ari9dam", "followers_url": "https://api.github.com/users/ari9dam/followers", "following_url": "https://api.github.com/users/ari9dam/following{/other_user}", "gists_url": "https://api.github.com/users/ari9dam/gists{/gist_id}", "starred_url": "https://api.github.com/users/ari9dam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ari9dam/subscriptions", "organizations_url": "https://api.github.com/users/ari9dam/orgs", "repos_url": "https://api.github.com/users/ari9dam/repos", "events_url": "https://api.github.com/users/ari9dam/events{/privacy}", "received_events_url": "https://api.github.com/users/ari9dam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The actual config says that `pad_token_id=0` - so I assume this is correct?\r\n\r\nWhat is interesting is that id `32000` maps to a token `'<pad>'` while the original vocab does not contain this token:\r\nhttps://huggingface.co/meta-llama/Llama-2-7b-hf/raw/main/tokenizer.json\r\n\r\nIt seems this is being added somewhere in HF code?", "cc @ArthurZucker ", "Hey! Yes this is not entirely expected, we update the slow tokenizer, but the fast version did not get the update. I'll open PRs to fix this! ", "@ArthurZucker jfyi I see the same behavior for slow and fast on 4.31.0\r\n\r\nedit: correction, indeed it is not set in the tokenizer", "Is there any plan to fix it?", "It is fixed on all model! ", "@ArthurZucker Does the fix really resolve this? I pip installed `transformers` from Github, and there is still a mismatch.\r\n\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\n\r\n# Using fast tokenizer by default\r\ntokenizer = AutoTokenizer.from_pretrained(\"meta-llama/Llama-2-7b-chat-hf\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"meta-llama/Llama-2-7b-chat-hf\")\r\nprint(len(tokenizer)) # 32001\r\nprint(model.config.vocab_size) # 32000\r\nprint(tokenizer.get_added_vocab()) # {'<pad>': 32000}\r\n```\r\n\r\nTransformers version: `4.32.0.dev0`", "Will check asap! Might be the fast tokenizer that did not get updated.", "Sorry but no, I cannot reproduce your issue. I do not know if you maybe did not update cached files but I have this: \r\n```python \r\n>>> from transformers import AutoTokenizer\r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"meta-llama/Llama-2-7b-chat-hf\")\r\n>>> len(tokenizer)\r\n32000\r\n>>> print(tokenizer.get_added_vocab())\r\n{}\r\n```", "Thanks for confirming. You're right, it's because of the locally cached files. Downloading them from huggingface and re-running the code now works fine." ]
1,689
1,690
1,689
NONE
null
### System Info Installed from source (4.32.0.dev0) ``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf") model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf") print(len(tokenizer)) #32001 print(model.config.vocab_size) #32000 ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf") model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf") print(len(tokenizer)) #32001 print(model.config.vocab_size) #32000 ``` ### Expected behavior The model vocab size and the tokenizer length should both be 32000. It seems the padding token of the tokenizer is set to '\<unk\>', which is not normally the case; it is normally not set.
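The mismatch here turned out to be stale cached files (see the comments above), but if a pad token really has been added, the standard remedy is to resize the embedding matrix to the tokenizer length. A sketch, assuming the gated checkpoint is accessible:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf")

# Only needed when the tokenizer genuinely carries extra added tokens.
if len(tokenizer) != model.config.vocab_size:
    model.resize_token_embeddings(len(tokenizer))
    assert model.config.vocab_size == len(tokenizer)
```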
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24899/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24899/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24898
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24898/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24898/comments
https://api.github.com/repos/huggingface/transformers/issues/24898/events
https://github.com/huggingface/transformers/issues/24898
1,810,949,796
I_kwDOCUB6oc5r8Oak
24,898
NLLB MoE router_state referenced before assignment
{ "login": "drunkcoding", "id": 14305648, "node_id": "MDQ6VXNlcjE0MzA1NjQ4", "avatar_url": "https://avatars.githubusercontent.com/u/14305648?v=4", "gravatar_id": "", "url": "https://api.github.com/users/drunkcoding", "html_url": "https://github.com/drunkcoding", "followers_url": "https://api.github.com/users/drunkcoding/followers", "following_url": "https://api.github.com/users/drunkcoding/following{/other_user}", "gists_url": "https://api.github.com/users/drunkcoding/gists{/gist_id}", "starred_url": "https://api.github.com/users/drunkcoding/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/drunkcoding/subscriptions", "organizations_url": "https://api.github.com/users/drunkcoding/orgs", "repos_url": "https://api.github.com/users/drunkcoding/repos", "events_url": "https://api.github.com/users/drunkcoding/events{/privacy}", "received_events_url": "https://api.github.com/users/drunkcoding/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "cc @ArthurZucker ", "Hey! Thanks for reporting! I remember working on a bug where NLLB-MoE was not being torch compiled because None values were returned. Will push a fix! \r\nGlad to see that Nllb-MoE is being used ๐Ÿค— " ]
1,689
1,692
1,692
NONE
null
### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.17 - Python version: 3.8.17 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @youn ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-moe-54b") model( input_ids=input_ids, attention_mask=attenstion_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, output_router_logits=True, return_dict=True, ) ``` ```bash transformers/models/nllb_moe/modeling_nllb_moe.py", line 720, in forward outputs += (router_states,) UnboundLocalError: local variable 'router_states' referenced before assignment ``` ### Expected behavior return encoder_router_logits and decoder_router_logits rather than error. The error happens on the dense layers where no router_state is returned.
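The traceback is the classic pattern of appending a name that is only bound on one branch: sparse (MoE) layers produce router states, dense layers do not. The fix pattern, sketched with simplified names (not the actual modeling code):

```python
import torch

def layer_forward(hidden_states, is_sparse, output_router_logits=True):
    # Bind router_states on every code path so the append below can never
    # raise UnboundLocalError when the layer is dense (non-MoE).
    router_states = None
    if is_sparse:
        router_states = torch.zeros(hidden_states.size(0), 4)  # stand-in router logits
    outputs = (hidden_states,)
    if output_router_logits:
        outputs += (router_states,)  # None for dense layers
    return outputs

print(layer_forward(torch.randn(2, 8), is_sparse=False)[1])  # None, no crash
```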
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24898/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24898/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24897
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24897/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24897/comments
https://api.github.com/repos/huggingface/transformers/issues/24897/events
https://github.com/huggingface/transformers/issues/24897
1,810,870,516
I_kwDOCUB6oc5r77D0
24,897
Is attention_mask supposed to be added to attention_weights? Based on the function's docstring, mask values are either 0 or 1.
{ "login": "rpanackal", "id": 36329474, "node_id": "MDQ6VXNlcjM2MzI5NDc0", "avatar_url": "https://avatars.githubusercontent.com/u/36329474?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rpanackal", "html_url": "https://github.com/rpanackal", "followers_url": "https://api.github.com/users/rpanackal/followers", "following_url": "https://api.github.com/users/rpanackal/following{/other_user}", "gists_url": "https://api.github.com/users/rpanackal/gists{/gist_id}", "starred_url": "https://api.github.com/users/rpanackal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rpanackal/subscriptions", "organizations_url": "https://api.github.com/users/rpanackal/orgs", "repos_url": "https://api.github.com/users/rpanackal/repos", "events_url": "https://api.github.com/users/rpanackal/events{/privacy}", "received_events_url": "https://api.github.com/users/rpanackal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @kashif ", "thanks @sgugger and @rpanackal let me check", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hey @rpanackal, the attention masks are first processed with:\r\n```python\r\ndef _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):\r\n \"\"\"\r\n Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.\r\n \"\"\"\r\n bsz, src_len = mask.size()\r\n tgt_len = tgt_len if tgt_len is not None else src_len\r\n\r\n expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)\r\n\r\n inverted_mask = 1.0 - expanded_mask\r\n\r\n return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min)\r\n```\r\nso the value are no longer 0 and 1 ๐Ÿ˜‰ " ]
1,689
1,693
1,693
NONE
null
https://github.com/huggingface/transformers/blame/476be08c4aa96f8c1cae4200d2677bbe8f12cf80/src/transformers/models/autoformer/modeling_autoformer.py#L619
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24897/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24897/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24896
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24896/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24896/comments
https://api.github.com/repos/huggingface/transformers/issues/24896/events
https://github.com/huggingface/transformers/pull/24896
1,810,848,257
PR_kwDOCUB6oc5V1XN_
24,896
๐ŸŒ [i18n-KO] Translated `perf_train_tpu_tf.md` to Korean
{ "login": "0525hhgus", "id": 47289574, "node_id": "MDQ6VXNlcjQ3Mjg5NTc0", "avatar_url": "https://avatars.githubusercontent.com/u/47289574?v=4", "gravatar_id": "", "url": "https://api.github.com/users/0525hhgus", "html_url": "https://github.com/0525hhgus", "followers_url": "https://api.github.com/users/0525hhgus/followers", "following_url": "https://api.github.com/users/0525hhgus/following{/other_user}", "gists_url": "https://api.github.com/users/0525hhgus/gists{/gist_id}", "starred_url": "https://api.github.com/users/0525hhgus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/0525hhgus/subscriptions", "organizations_url": "https://api.github.com/users/0525hhgus/orgs", "repos_url": "https://api.github.com/users/0525hhgus/repos", "events_url": "https://api.github.com/users/0525hhgus/events{/privacy}", "received_events_url": "https://api.github.com/users/0525hhgus/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I closed the PR because the document was not generated correctly." ]
1,689
1,691
1,691
CONTRIBUTOR
null
<!-- PR์˜ ์ œ๋ชฉ์€ "๐ŸŒ [i18n-KO] Translated `<your_file>.md` to Korean" ์œผ๋กœ ๋ถ€ํƒ๋“œ๋ฆฝ๋‹ˆ๋‹ค --> # What does this PR do? Translated the `perf_train_tpu_tf.md` file of the documentation to Korean. Thank you in advance for your review! Part of https://github.com/huggingface/transformers/issues/20179 <!-- ๋ฉ”์ธ ์ด์Šˆ์— ๊ธฐ๋ก์ด ๋‚จ์•„์š”! ๊ฐ€์งœ์—ฐ๊ตฌ์†Œ ๋ฆฌํฌ๋ฅผ ์‚ฌ์šฉํ•ด ์—ฐ์Šตํ•˜์‹ค๋•Œ๋Š” ์ œ๊ฑฐํ•ด์ฃผ์‹œ๋ฉด ๊ฐ์‚ฌํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค! :smile: --> ## Before reviewing - [x] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [x] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [x] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [ ] Check Inline TOC (e.g. `[[lowercased-header]]`) - [ ] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review? (Initial) <!-- 1. ์œ„ ์ฒดํฌ๊ฐ€ ๋ชจ๋‘ ์™„๋ฃŒ๋œ ๋’ค์—๋งŒ ๊ฐ€์งœ์—ฐ๊ตฌ์†Œ ํŒ€์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> <!-- Team PseudoLab, may you please review this PR? --> @kihoon71, @0525hhgus, @54data, @Sunmin0520, @seank021, @augustinLib ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. ๊ฐ€์งœ์—ฐ๊ตฌ์†Œ ํŒ€์›๋“ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ€ ๋๋‚œ ํ›„์—๋งŒ ํ—ˆ๊น…ํŽ˜์ด์Šค ์ง์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> <!-- May you please review this PR? --> <!-- @sgugger, @ArthurZucker, @eunseojo -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24896/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24896/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24896", "html_url": "https://github.com/huggingface/transformers/pull/24896", "diff_url": "https://github.com/huggingface/transformers/pull/24896.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24896.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24895
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24895/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24895/comments
https://api.github.com/repos/huggingface/transformers/issues/24895/events
https://github.com/huggingface/transformers/pull/24895
1,810,780,275
PR_kwDOCUB6oc5V1IIO
24,895
Update tested versions in READMEs
{ "login": "EliahKagan", "id": 1771172, "node_id": "MDQ6VXNlcjE3NzExNzI=", "avatar_url": "https://avatars.githubusercontent.com/u/1771172?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EliahKagan", "html_url": "https://github.com/EliahKagan", "followers_url": "https://api.github.com/users/EliahKagan/followers", "following_url": "https://api.github.com/users/EliahKagan/following{/other_user}", "gists_url": "https://api.github.com/users/EliahKagan/gists{/gist_id}", "starred_url": "https://api.github.com/users/EliahKagan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EliahKagan/subscriptions", "organizations_url": "https://api.github.com/users/EliahKagan/orgs", "repos_url": "https://api.github.com/users/EliahKagan/repos", "events_url": "https://api.github.com/users/EliahKagan/events{/privacy}", "received_events_url": "https://api.github.com/users/EliahKagan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger Thanks for reviewing! I've made it non-draft so it can be merged, as requested.\r\n\r\nBefore I saw your comment, I noticed that the listed TensorFlow versions were also older than the minimum `transformers` currently supports and added a commit to deal with that. I'm not sure if that newest commit was part of what you had looked at. If you'd like me to remove that commit, or otherwise make a change related to it, please let me know!", "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24895). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? This updates the top-level readme files to say the project has been tested with Python 3.8+ (instead of 3.7+) and PyTorch 1.10+ (instead of 1.9+). Versions prior to those are no longer supported as of [v4.31.0](https://github.com/huggingface/transformers/releases/tag/v4.31.0). It also updates some other versions in those lists that had become out of date earlier. The non-English readme files were less up to date than the English readme file `README.md`. I allowed my editor to remove trailing whitespace, since it does not appear to have been intentional. Rendered Markdown does not appear changed. After editing all seven READMEs, running `make fix-copies` (required to pass CI) propagated this whitespace removal to (just) one of the `index.md` files, which is why that is also changed. However, I understand if whitespace removal may be viewed as best done separately; if requested, I'd be pleased to modify this PR to retain the trailing whitespace. ## Rationale My reasoning is similar to the reasoning that was given for the previous update to these versions in #24307. As of [**v4.31.0**](https://github.com/huggingface/transformers/releases/tag/v4.31.0), ๐Ÿค— Transformers has dropped support for Python 3.7 (#24091) and for PyTorch 1.9 (#24080). Because of that: - If the version ranges noted in the readme files are not changed, some users are likely to be misled into expecting new releases of ๐Ÿค— Transformers to support those versions. - A lesser issue is that the claim that this repository is tested on all those versions will gradually become inaccurate as further contributions are made to the repository, now that those versions are not supported. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. <!-- I'm not sure who, if anyone, I should ping for this. I may edit in a ping later. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24895/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24895/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24895", "html_url": "https://github.com/huggingface/transformers/pull/24895", "diff_url": "https://github.com/huggingface/transformers/pull/24895.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24895.patch", "merged_at": 1689765455000 }
https://api.github.com/repos/huggingface/transformers/issues/24894
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24894/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24894/comments
https://api.github.com/repos/huggingface/transformers/issues/24894/events
https://github.com/huggingface/transformers/pull/24894
1,810,765,370
PR_kwDOCUB6oc5V1EzS
24,894
add a configuration option in llama architecture
{ "login": "likenneth", "id": 40636646, "node_id": "MDQ6VXNlcjQwNjM2NjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/40636646?v=4", "gravatar_id": "", "url": "https://api.github.com/users/likenneth", "html_url": "https://github.com/likenneth", "followers_url": "https://api.github.com/users/likenneth/followers", "following_url": "https://api.github.com/users/likenneth/following{/other_user}", "gists_url": "https://api.github.com/users/likenneth/gists{/gist_id}", "starred_url": "https://api.github.com/users/likenneth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/likenneth/subscriptions", "organizations_url": "https://api.github.com/users/likenneth/orgs", "repos_url": "https://api.github.com/users/likenneth/repos", "events_url": "https://api.github.com/users/likenneth/events{/privacy}", "received_events_url": "https://api.github.com/users/likenneth/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for your PR! I think this is more suited to our code on the Hub API as it doesn't apply to the base LLaMA checkpoints. cc @ArthurZucker if you have a different opinion.", "Hi @sgugger, \r\n\r\nAppreciate your prompt reply. However I think I missed something. What do you mean by it doesn't apply to the base LLaMA? \r\n\r\nI can run these code without problems;\r\n\r\n```\r\nimport llama\r\nmodel_name = 'decapoda-research/llama-7b-hf'\r\ntokenizer = llama.LLaMATokenizer.from_pretrained(model_name)\r\nmodel = llama.LLaMAForCausalLM.from_pretrained(model_name, low_cpu_mem_usage = True, torch_dtype=torch.float16)\r\n```\r\n\r\nAnd what is the Hub API? My understanding was, any model on HF Hub has to be an instantiation of the transformers library, then since honest llama requires a bias term and this cannot be achieved by existing flexibility of the configuration, I made this PR that flex up the HF LLaMA model. \r\n\r\nThanks!", "You are requesting to make some changes to a model to accommodate your custom versions of it, the original LLaMA checkpoints do not need this config flag. So that's why this should be done via the [code on the Hub API](https://huggingface.co/docs/transformers/custom_models) which allows you to share your code along with the model weights on the Hub.", "I see, let me have a look at the link @sgugger shared to see how to upload customized model to HF!", "Thanks for your help! I baked the inference-time intervention into a LLaMA-2-7B. The process is done offline and the edited model can work independently, as fast as the original LLaMA-2. Link: https://huggingface.co/likenneth/honest_llama2_chat_7B" ]
1,689
1,692
1,692
NONE
null
# What does this PR do?

<!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. -->

I add a configuration option in llama to switch on the bias in the output projection of MHA, which is critical for incorporating a new line of alignment research called [inference-time intervention](https://arxiv.org/pdf/2306.03341.pdf) into the huggingface hub. Basically, inference-time intervention finds specific vectors that, when added into the residual stream during the forward pass, can significantly boost the truthfulness of the LLaMA family, including Alpaca and Vicuna. However, it requires a slight change to the original architecture: the bias term needs to be activated in the output projection of the MHA. By merging this PR, I can push [honest llama](https://github.com/likenneth/honest_llama) onto the huggingface hub and provide all huggingface users with a more truthful LLaMA model, and its friends, honest Alpaca and honest Vicuna. This PR contains only 3 lines of new code and is completely backward-compatible.

Fixes # (issue) N/A

## Before submitting

- :x: This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- :white_check_mark: Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- :x: Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- :white_check_mark: Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- :x: Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker and @younesbelkada

<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
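For context, here is a minimal sketch of the kind of change the PR describes: gating the bias of the attention output projection on a config flag. The flag name `o_proj_bias` is an assumption for illustration, not necessarily what the PR uses.

```python
import torch.nn as nn


class AttentionOutputSketch(nn.Module):
    """Illustrative only: MHA output projection whose bias is gated on a config flag."""

    def __init__(self, hidden_size: int, o_proj_bias: bool = False):  # `o_proj_bias` is a hypothetical flag name
        super().__init__()
        # With the default (False) this matches the original bias-free LLaMA projection,
        # so existing checkpoints load unchanged; setting True enables the extra bias
        # term that inference-time intervention needs.
        self.o_proj = nn.Linear(hidden_size, hidden_size, bias=o_proj_bias)
```

Keeping the flag default at `False` is what makes the change backward-compatible: nothing changes for users who never set it.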
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24894/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24894/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24894", "html_url": "https://github.com/huggingface/transformers/pull/24894", "diff_url": "https://github.com/huggingface/transformers/pull/24894.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24894.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24893
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24893/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24893/comments
https://api.github.com/repos/huggingface/transformers/issues/24893/events
https://github.com/huggingface/transformers/pull/24893
1,810,654,531
PR_kwDOCUB6oc5V0sQn
24,893
Avoid some pipeline tasks to use `use_cache=True`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "LGTM! Thanks for diving into this", "Thanks for resolving this so quickly @ydshieh!" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do?

Fix #24873

For example, we can pass `use_cache=True` (or set this in the config) to `BartForSequenceClassification` and it will return `past_key_values` (although these are not useful to the task). This responds to #24873, even though the memory issue reported there is not yet confirmed. In any case, avoiding the cache and not letting the model return `past_key_values` avoids overhead like CPU/GPU communication (huge/many tensors). With the code snippet I provided in #24873, this gives a 16.5% reduction in running time.

We could probably extend this PR to other pipeline task classes.
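As a rough illustration of the effect (not the PR's actual diff), the cache can also be disabled at call time; with `use_cache=False` the classification output should carry no `past_key_values`:

```python
import torch
from transformers import AutoTokenizer, BartForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = BartForSequenceClassification.from_pretrained("facebook/bart-base")

inputs = tokenizer("A short example sentence.", return_tensors="pt")
with torch.no_grad():
    # Without the cache, the model skips building and returning past_key_values,
    # which are useless for classification anyway.
    outputs = model(**inputs, use_cache=False)
print(outputs.past_key_values)  # expected: None
```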
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24893/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24893", "html_url": "https://github.com/huggingface/transformers/pull/24893", "diff_url": "https://github.com/huggingface/transformers/pull/24893.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24893.patch", "merged_at": 1689752992000 }
https://api.github.com/repos/huggingface/transformers/issues/24892
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24892/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24892/comments
https://api.github.com/repos/huggingface/transformers/issues/24892/events
https://github.com/huggingface/transformers/pull/24892
1,810,383,752
PR_kwDOCUB6oc5Vz2xl
24,892
Add descriptive docstring to TemperatureLogitsWarper
{ "login": "nablabits", "id": 33068707, "node_id": "MDQ6VXNlcjMzMDY4NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/33068707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nablabits", "html_url": "https://github.com/nablabits", "followers_url": "https://api.github.com/users/nablabits/followers", "following_url": "https://api.github.com/users/nablabits/following{/other_user}", "gists_url": "https://api.github.com/users/nablabits/gists{/gist_id}", "starred_url": "https://api.github.com/users/nablabits/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nablabits/subscriptions", "organizations_url": "https://api.github.com/users/nablabits/orgs", "repos_url": "https://api.github.com/users/nablabits/repos", "events_url": "https://api.github.com/users/nablabits/events{/privacy}", "received_events_url": "https://api.github.com/users/nablabits/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please note that I force pushed the branch as I had to fix the linter (and took the advantage to sync my fork).\r\n\r\nIn case someone else runs into the same problem (and by some planetary alignment :ringed_planet: :earth_africa: :new_moon: lands in this tiny comment), it took me a while to figure out the issue as `make quality` was failing without fixing anything, and `make style` was fixing the wrong thing. This is what it was failing:\r\n```python\r\n>>> some_python_code = 1\r\nSome output of above prompt\r\n\r\nThis kind of text shouldn't be here\r\n\r\n>>> some_other_python_code = 2\r\n```", "_The documentation is not available anymore as the PR was closed or merged._", "> That is a great example @nablabits! :fire: \r\n> \r\n> Thank you for iterating on it, I've requested a few minor changes (to further improve information density), and it should be ready to merge after they are addressed :hugs: \r\n\r\nHi @gante, thanks for your patience, guidance and support, much appreciated :hugs: . I greatly enjoyed the learning experience. Are you happy for me to pick something else in the list (I'd like to deepen in my knowledge of this tiny bit of the platform) or the protocol suggests that I should leave remaining tasks for other folks?\r\n", "@nablabits feel free to pick more tasks from the list, as many as you want (one at a time, of course) -- as long as you confirm that no one is working on a given task and that you share on the issue that you've decided to take it ๐Ÿค— " ]
1,689
1,690
1,690
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes one of the cases in https://github.com/huggingface/transformers/issues/24783 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? @gante ## Notes My first PR to this library, greatly appreciate your patience :wink: <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
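As a quick illustration of the behavior the new docstring documents (using the standard definition of temperature scaling, not the class's actual code): dividing the logits by a temperature below 1 sharpens the next-token distribution, while a temperature above 1 flattens it.

```python
import torch

logits = torch.tensor([2.0, 1.0, 0.5])
for temperature in (0.5, 1.0, 2.0):
    # Temperature scaling: softmax(logits / T). T < 1 sharpens, T > 1 flattens.
    probs = torch.softmax(logits / temperature, dim=-1)
    print(f"T={temperature}: {[round(p, 3) for p in probs.tolist()]}")
```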
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24892/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24892/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24892", "html_url": "https://github.com/huggingface/transformers/pull/24892", "diff_url": "https://github.com/huggingface/transformers/pull/24892.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24892.patch", "merged_at": 1690376306000 }
https://api.github.com/repos/huggingface/transformers/issues/24891
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24891/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24891/comments
https://api.github.com/repos/huggingface/transformers/issues/24891/events
https://github.com/huggingface/transformers/pull/24891
1,810,267,222
PR_kwDOCUB6oc5VzddK
24,891
[`Llama2`] Add support for Llama 2
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24891). All of your documentation changes will be reflected on that endpoint.", "Somehow this commit will easily lead the llama model to be overflow in fp16 during training. Hope someone can take a look :)", "@fe1ixxu you can have a look at #25065 for more details on this!" ]
1,689
1,692
1,689
COLLABORATOR
null
# What does this PR do? Add support for Llama 2! 🔥 🔥
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24891/reactions", "total_count": 40, "+1": 1, "-1": 0, "laugh": 0, "hooray": 4, "confused": 0, "heart": 6, "rocket": 29, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24891/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24891", "html_url": "https://github.com/huggingface/transformers/pull/24891", "diff_url": "https://github.com/huggingface/transformers/pull/24891.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24891.patch", "merged_at": 1689707912000 }
https://api.github.com/repos/huggingface/transformers/issues/24890
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24890/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24890/comments
https://api.github.com/repos/huggingface/transformers/issues/24890/events
https://github.com/huggingface/transformers/pull/24890
1,810,249,187
PR_kwDOCUB6oc5VzZdX
24,890
Check for accelerate env var when doing CPU only
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do?

Checks whether `ACCELERATE_USE_CPU` is enabled and, if so (and `--use_cpu` isn't used), ensures the trainer parts get set properly.

Fixes # (issue)

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

@sgugger
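A rough sketch of the check described above (not the Trainer's exact code): treat CPU mode as requested if either the CLI flag or Accelerate's env var says so.

```python
import os


def cpu_only_requested(use_cpu_flag: bool) -> bool:
    # Accelerate exports ACCELERATE_USE_CPU ("true"/"false"); honor it even
    # when the user did not pass --use_cpu explicitly.
    env_says_cpu = os.environ.get("ACCELERATE_USE_CPU", "false").lower() == "true"
    return use_cpu_flag or env_says_cpu
```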
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24890/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24890/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24890", "html_url": "https://github.com/huggingface/transformers/pull/24890", "diff_url": "https://github.com/huggingface/transformers/pull/24890.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24890.patch", "merged_at": 1689720037000 }
https://api.github.com/repos/huggingface/transformers/issues/24889
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24889/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24889/comments
https://api.github.com/repos/huggingface/transformers/issues/24889/events
https://github.com/huggingface/transformers/pull/24889
1,810,183,394
PR_kwDOCUB6oc5VzK8e
24,889
[`Blip`] Fix blip output name
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi thanks for this @younesbelkada โค๏ธ \r\n\r\nHowever, this is not enough, you have to add something like \r\n\r\nhttps://github.com/huggingface/transformers/blob/3ec10e6c76362191b61260300fe1d6173a8dd7e1/src/transformers/models/swin/modeling_swin.py#L171-L177\r\n\r\nin that PR to make backward compatibility.\r\n\r\n(unless @sgugger say this model is still recent and/or not high usage)", "Thanks @ydshieh for double checking, I missed that, it should be now added! :D ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24889). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,689
1,689
CONTRIBUTOR
null
As suggested by @ydshieh offline, and similarly to https://github.com/huggingface/transformers/pull/22893: there is in fact no reason to call the output logits `decoder_logits`, as they always come from the decoder. cc @sgugger @ydshieh
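The backward-compatibility shim mentioned in the review thread presumably follows the linked Swin pattern; a sketch of that pattern (class and attribute names illustrative, not Blip's actual output class):

```python
import warnings


class OutputWithDeprecatedAlias:
    """Sketch of the Swin-style deprecation shim, not Blip's actual class."""

    def __init__(self, logits):
        self.logits = logits

    @property
    def decoder_logits(self):
        # Old attribute name keeps working, but warns users to migrate.
        warnings.warn(
            "`decoder_logits` is deprecated and will be removed in a future version; "
            "use `logits` instead.",
            FutureWarning,
        )
        return self.logits
```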
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24889/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24889/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24889", "html_url": "https://github.com/huggingface/transformers/pull/24889", "diff_url": "https://github.com/huggingface/transformers/pull/24889.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24889.patch", "merged_at": 1689701428000 }
https://api.github.com/repos/huggingface/transformers/issues/24888
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24888/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24888/comments
https://api.github.com/repos/huggingface/transformers/issues/24888/events
https://github.com/huggingface/transformers/pull/24888
1,810,101,278
PR_kwDOCUB6oc5Vy4xM
24,888
[`InstructBlip`] Fix int8/fp4 issues
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do?

Fixes: https://github.com/huggingface/transformers/issues/24884

To reproduce:

```python
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration
import torch
from PIL import Image
import requests

device = "cuda" if torch.cuda.is_available() else "cpu"

MODEL_NAME = "Salesforce/instructblip-flan-t5-xl"
# Note: Here we no longer specify `torch.bfloat16`.
model = InstructBlipForConditionalGeneration.from_pretrained(MODEL_NAME, device_map={"":0}, load_in_4bit=True)

processor = InstructBlipProcessor.from_pretrained(MODEL_NAME)

url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
prompt = "What is unusual about this image?"

# Note: Here we no longer specify `torch.bfloat16`, but we use `torch.float16` as shown in the test code for Salesforce/instructblip-vicuna-7b
inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, torch.float16)

outputs = model.generate(
    **inputs,
    do_sample=False,
    num_beams=5,
    max_length=256,
    min_length=1,
    top_p=0.9,
    repetition_penalty=1.5,
    length_penalty=1.0,
    temperature=1,
)

generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip()
print(generated_text)
```

Strangely, I couldn't reproduce the issue with vicuna models but managed to reproduce it with flan-t5 models. It is also very strange that users never reported the same issue with Blip2.

cc @sgugger
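For the record, here is a minimal standalone sketch of the failure mode the linked issue's traceback points at: a LayerNorm whose parameters stayed in fp32 receiving fp16 activations. On the torch version from the report (1.13) this mixed-dtype call errors out.

```python
import torch
import torch.nn.functional as F

hidden = torch.randn(1, 8, dtype=torch.float16)  # activations cast to half
weight = torch.ones(8, dtype=torch.float32)      # LayerNorm params left in fp32
bias = torch.zeros(8, dtype=torch.float32)

# On torch 1.13 this raises: RuntimeError: expected scalar type Float but found Half
out = F.layer_norm(hidden, (8,), weight, bias)
```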
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24888/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24888", "html_url": "https://github.com/huggingface/transformers/pull/24888", "diff_url": "https://github.com/huggingface/transformers/pull/24888.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24888.patch", "merged_at": 1689701076000 }
https://api.github.com/repos/huggingface/transformers/issues/24887
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24887/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24887/comments
https://api.github.com/repos/huggingface/transformers/issues/24887/events
https://github.com/huggingface/transformers/pull/24887
1,810,023,396
PR_kwDOCUB6oc5Vyn2C
24,887
๐ŸŒ [i18n-KO] Translated `perf_train_tpu_tf.md` to Korean
{ "login": "0525hhgus", "id": 47289574, "node_id": "MDQ6VXNlcjQ3Mjg5NTc0", "avatar_url": "https://avatars.githubusercontent.com/u/47289574?v=4", "gravatar_id": "", "url": "https://api.github.com/users/0525hhgus", "html_url": "https://github.com/0525hhgus", "followers_url": "https://api.github.com/users/0525hhgus/followers", "following_url": "https://api.github.com/users/0525hhgus/following{/other_user}", "gists_url": "https://api.github.com/users/0525hhgus/gists{/gist_id}", "starred_url": "https://api.github.com/users/0525hhgus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/0525hhgus/subscriptions", "organizations_url": "https://api.github.com/users/0525hhgus/orgs", "repos_url": "https://api.github.com/users/0525hhgus/repos", "events_url": "https://api.github.com/users/0525hhgus/events{/privacy}", "received_events_url": "https://api.github.com/users/0525hhgus/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
<!-- PR์˜ ์ œ๋ชฉ์€ "๐ŸŒ [i18n-KO] Translated `<your_file>.md` to Korean" ์œผ๋กœ ๋ถ€ํƒ๋“œ๋ฆฝ๋‹ˆ๋‹ค --> # What does this PR do? Translated the `perf_train_tpu_tf.md` file of the documentation to Korean ๐Ÿ˜„ Thank you in advance for your review! Part of https://github.com/huggingface/transformers/issues/20179 <!-- ๋ฉ”์ธ ์ด์Šˆ์— ๊ธฐ๋ก์ด ๋‚จ์•„์š”! ๊ฐ€์งœ์—ฐ๊ตฌ์†Œ ๋ฆฌํฌ๋ฅผ ์‚ฌ์šฉํ•ด ์—ฐ์Šตํ•˜์‹ค๋•Œ๋Š” ์ œ๊ฑฐํ•ด์ฃผ์‹œ๋ฉด ๊ฐ์‚ฌํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค! :smile: --> ## Before reviewing - [x] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [x] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [x] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review? (Initial) <!-- 1. ์œ„ ์ฒดํฌ๊ฐ€ ๋ชจ๋‘ ์™„๋ฃŒ๋œ ๋’ค์—๋งŒ ๊ฐ€์งœ์—ฐ๊ตฌ์†Œ ํŒ€์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> <!-- Team PseudoLab, may you please review this PR? --> @kihoon71, @0525hhgus, @54data, @Sunmin0520, @seank021, @augustinLib ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. ๊ฐ€์งœ์—ฐ๊ตฌ์†Œ ํŒ€์›๋“ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ€ ๋๋‚œ ํ›„์—๋งŒ ํ—ˆ๊น…ํŽ˜์ด์Šค ์ง์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> <!-- May you please review this PR? --> <!-- @sgugger, @ArthurZucker, @eunseojo -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24887/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24887/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24887", "html_url": "https://github.com/huggingface/transformers/pull/24887", "diff_url": "https://github.com/huggingface/transformers/pull/24887.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24887.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24886
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24886/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24886/comments
https://api.github.com/repos/huggingface/transformers/issues/24886/events
https://github.com/huggingface/transformers/pull/24886
1,809,982,984
PR_kwDOCUB6oc5Vye__
24,886
Separate CircleCI cache between `main` and `pull` or other branches
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do?

Keep the CircleCI cache used on the `main` branch unaffected by other branches/PRs.

Read the following keeping in mind that:

- @lhoestq created a branch yesterday using `datasets 2.13.2.dev0`
- @sgugger changed `setup.py` yesterday in the commit `4.32.0.dev0`
- the cache used by @lhoestq's branch is used on `main` in/after @sgugger's commit.

Also: the CI is triggered on `main` much less frequently, so keeping a separate cache for it, apart from the pull events, is fine (in terms of cost).

Assume:

- someone changes `setup.py` to use a dev version of a library, say `datasets`, in a PR or an HF non-main branch
- CI is triggered + [precise] cache not found + [partial] cache found + cache updated with the `datasets` dev version
- shortly after, another person changes `setup.py` (not necessarily with the same library involved) in another PR/branch which gets merged
- CI is triggered + [precise] cache not found + [partial] cache found:
  - this could be the cache from above (depending on the time gap)
  - the `datasets` version remains the dev one if the merged PR has `datasets>=XXX` in `setup.py` (as the dev version is newer, the requirement is already satisfied)
  - we get failures on `main` due to the `datasets` dev version, which should be avoided.
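An illustrative sketch of the keying idea in Python (CircleCI expresses this in its YAML config; the key format here is made up): scope the cache key by branch so `main` never restores a cache written by a PR branch.

```python
def circleci_cache_key(branch: str, setup_py_checksum: str) -> str:
    # Hypothetical key format: prefixing with a scope keeps caches written on
    # PR branches from ever being restored on `main`, and vice versa.
    scope = "main" if branch == "main" else "pull"
    return f"v1-{scope}-{setup_py_checksum}"


print(circleci_cache_key("main", "abc123"))          # v1-main-abc123
print(circleci_cache_key("fix-datasets", "abc123"))  # v1-pull-abc123
```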
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24886/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24886/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24886", "html_url": "https://github.com/huggingface/transformers/pull/24886", "diff_url": "https://github.com/huggingface/transformers/pull/24886.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24886.patch", "merged_at": 1689707127000 }
https://api.github.com/repos/huggingface/transformers/issues/24885
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24885/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24885/comments
https://api.github.com/repos/huggingface/transformers/issues/24885/events
https://github.com/huggingface/transformers/pull/24885
1,809,971,855
PR_kwDOCUB6oc5VycxT
24,885
Disable ipex env var if false
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do?

Properly sets the Accelerate env variable if ipex is set to False (the default in the training args).

Fixes # (issue)

Solves https://github.com/huggingface/transformers/issues/24871

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

@sgugger
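A rough sketch of the intended behavior (not the PR's exact diff): explicitly export Accelerate's IPEX switch as "false" when `--use_ipex` is off, rather than leaving the variable unset.

```python
import os


def configure_ipex(use_ipex: bool) -> None:
    # Accelerate reads ACCELERATE_USE_IPEX; writing "false" explicitly keeps a
    # stale "true" from a previous configuration from silently enabling IPEX.
    os.environ["ACCELERATE_USE_IPEX"] = str(use_ipex).lower()
```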
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24885/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24885/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24885", "html_url": "https://github.com/huggingface/transformers/pull/24885", "diff_url": "https://github.com/huggingface/transformers/pull/24885.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24885.patch", "merged_at": 1689710822000 }
https://api.github.com/repos/huggingface/transformers/issues/24884
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24884/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24884/comments
https://api.github.com/repos/huggingface/transformers/issues/24884/events
https://github.com/huggingface/transformers/issues/24884
1,809,952,779
I_kwDOCUB6oc5r4bAL
24,884
InstructBLIP - FlanT5-XL model Int4/8 quantization broken
{ "login": "lukealexmiller", "id": 1459243, "node_id": "MDQ6VXNlcjE0NTkyNDM=", "avatar_url": "https://avatars.githubusercontent.com/u/1459243?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lukealexmiller", "html_url": "https://github.com/lukealexmiller", "followers_url": "https://api.github.com/users/lukealexmiller/followers", "following_url": "https://api.github.com/users/lukealexmiller/following{/other_user}", "gists_url": "https://api.github.com/users/lukealexmiller/gists{/gist_id}", "starred_url": "https://api.github.com/users/lukealexmiller/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lukealexmiller/subscriptions", "organizations_url": "https://api.github.com/users/lukealexmiller/orgs", "repos_url": "https://api.github.com/users/lukealexmiller/repos", "events_url": "https://api.github.com/users/lukealexmiller/events{/privacy}", "received_events_url": "https://api.github.com/users/lukealexmiller/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[ { "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false } ]
[ "Hi @lukealexmiller \r\nThanks for reporting, will look into it ASAP. ", "Hi @lukealexmiller \r\nAgain, thanks for reporting, I made a patch to support 8bit / 4bit correctly for Flan-t5 models in https://github.com/huggingface/transformers/pull/24888 , before it gets merged you can download it with the following:\r\n\r\n```bash\r\npip install git+https://github.com/younesbelkada/transformers.git@fix-instructblip\r\n```", "Hi @lukealexmiller, isn't that expected to fail given that you load the model in 8 bit, not providing any `dtype`, and cast the inputs to `torch.float16`? I personally also provide the `torch_dtype` argument to the `from_pretrained` method, which works:\r\n```\r\nfrom PIL import Image\r\nimport requests\r\nfrom transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration\r\nimport torch\r\n\r\nprocessor = InstructBlipProcessor.from_pretrained(\"Salesforce/instructblip-flan-t5-xl\")\r\nmodel = InstructBlipForConditionalGeneration.from_pretrained(\r\n \"Salesforce/instructblip-flan-t5-xl\", load_in_8bit=True, device_map=\"auto\", torch_dtype=torch.bfloat16\r\n)\r\n\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\nprompt = \"How many cats are there?\"\r\ninputs = processor(images=image, text=prompt, return_tensors=\"pt\").to(device=\"cuda\", dtype=torch.bfloat16)\r\n\r\ngenerated_ids = model.generate(**inputs)\r\ngenerated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()\r\nprint(generated_text)\r\n```\r\nI'm also casting to `bfloat16` rather than `float16` to match the original implementation.\r\n\r\nMaybe @younesbelkada knows whether or not this should work without providing the `torch_dtype` argument.", "Thanks for the prompt responses and PR @younesbelkada & @NielsRogge.\r\n\r\n@NielsRogge that does make sense, but if I load the model `from_pretrained` and also specify `torch_dtype`, my notebook kernel dies. I'm running on A10G w/24GB RAM, and as this works without `load_in_8bit=True`, I don't believe this is an OOM error. Although I don't have more detailed error info yet.\r\nAny thoughts? ", "@lukealexmiller the fix should be now on the main branch, feel free to re-open if the issue persists! ", "@younesbelkada using `Resolved https://github.com/huggingface/transformers.git to commit 07360b6c9c9448d619a82798419ed291dfc6ac8f` I am still unable to load the model and successfully call generate using `torch.float16`. I see the same error as before. \r\nHowever, it seems likely that I should be specifying `torch.bfloat16` as proposed by @NielsRogge.\r\n\r\nI don't have access to more than 24GB at the moment and don't see why the `int8` quantization would counterintuitively need more than `bfloat16` and cause OOM, but @NielsRogge can you confirm the GPU you're using and the memory footprint during/after model loading?\r\n\r\nCan one of you re-open the issue as the PR doesn't appear to have solved the problem, but it appears that there is another problem that causes the notebook kernel to crash? 
Thanks", "@younesbelkada / @NielsRogge I am unable to re-open this issue, are either of you able to do that and do you have any ideas on the problem I'm still seeing?", "Hi @lukealexmiller \r\nI can confirm this script:\r\n\r\n```python\r\nfrom transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration\r\nimport torch\r\nfrom PIL import Image\r\nimport requests\r\n\r\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\r\n\r\nMODEL_NAME = \"Salesforce/instructblip-flan-t5-xl\"\r\n# Note: Here we no longer specify `torch.bfloat16`.\r\nmodel = InstructBlipForConditionalGeneration.from_pretrained(MODEL_NAME, device_map={\"\":0}, load_in_4bit=True)\r\n\r\nprocessor = InstructBlipProcessor.from_pretrained(MODEL_NAME)\r\n\r\nurl = \"https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw).convert(\"RGB\")\r\nprompt = \"What is unusual about this image?\"\r\n\r\n# Note: Here we no longer specify `torch.bfloat16`, but we use `torch.float16` as shown in the test code for Salesforce/instructlblup-vicuna-7b\r\ninputs = processor(images=image, text=prompt, return_tensors=\"pt\").to(device, torch.float16)\r\n\r\noutputs = model.generate(\r\n **inputs,\r\n do_sample=False,\r\n num_beams=5,\r\n max_length=256,\r\n min_length=1,\r\n top_p=0.9,\r\n repetition_penalty=1.5,\r\n length_penalty=1.0,\r\n temperature=1,\r\n)\r\n\r\ngenerated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip()\r\nprint(generated_text)\r\n```\r\nStill works on my end, can you try to uninstall transformers and re-install it from source:\r\n\r\n```bash\r\npip uninstall transformers\r\npip install git+https://github.com/huggingface/transformers.git\r\n```", "Hi @lukealexmiller the code snippet above is also working for me on an A100 GPU with 80 GB of RAM. \r\n\r\nI'm currently unable to access the GPU, but if I am I could report GPU memory usage. Normally it's the same as the model size in case you're using int4 quantization (so for Salesforce/instructblip-flan-t5-xl that's around 8GB of GPU RAM).", "Update; I can confirm 9299MiB / 81920MiB of the A100 is being used. Hence around 9 GB, which is in line with int4 quantization (as much memory as you have parameters, i.e. 9 billion parameter model = 9 billion bytes = 9 GB of GPU memory)." ]
1,689
1,690
1,689
NONE
null
### System Info - `transformers` version: 4.32.0.dev0 - Platform: Linux-4.14.314-238.539.amzn2.x86_64-x86_64-with-glibc2.31 - Python version: 3.10.9 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.22.0.dev0 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: MULTI_GPU - mixed_precision: no - use_cpu: False - num_processes: 4 - machine_rank: 0 - num_machines: 1 - gpu_ids: all - rdzv_backend: static - same_network: True - main_training_function: main - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @younesbelkada @NielsRogge ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ## Problem Specifying `load_in_8bit` or `load_in_4bit` for `Salesforce/instructblip-flan-t5-xl`, I am able to load the model into GPU memory, but calling generate results in an error. ## Steps to Reproduce: ### torch.bfloat16 Working Version: 1. Load model into memory ``` from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration import torch from PIL import Image import requests device = "cuda" if torch.cuda.is_available() else "cpu" MODEL_NAME = "Salesforce/instructblip-flan-t5-xl" # load in bfloat16 - this is type t5 models were pretrained using (see https://github.com/salesforce/LAVIS/issues/418) model = InstructBlipForConditionalGeneration.from_pretrained(MODEL_NAME, device_map="auto", torch_dtype=torch.bfloat16) processor = InstructBlipProcessor.from_pretrained(MODEL_NAME) ``` 2. Run example VQA ``` url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") prompt = "What is unusual about this image?" # Cast to torch.bfloat16, otherwise we get an error. inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, torch.bfloat16) outputs = model.generate( **inputs, do_sample=False, num_beams=5, max_length=256, min_length=1, top_p=0.9, repetition_penalty=1.5, length_penalty=1.0, temperature=1, ) generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip() print(generated_text) ``` 3. Observe generated text: `The image depicts a man ironing clothes on the back of a yellow van in the middle of a busy city street. The unusual aspect of the image is that the man is not wearing a shirt, which may indicate that he is a homeless person or an immigrant. In addition, there are several other vehicles in the background, including taxis, buses, and motorcycles.` ### `load_in_8bit` Failing Version: 1. Load model into memory ``` from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration import torch from PIL import Image import requests device = "cuda" if torch.cuda.is_available() else "cpu" MODEL_NAME = "Salesforce/instructblip-flan-t5-xl" # Note: Here we no longer specify `torch.bfloat16`. 
model = InstructBlipForConditionalGeneration.from_pretrained(MODEL_NAME, device_map="auto", load_in_8bit=True) processor = InstructBlipProcessor.from_pretrained(MODEL_NAME) ``` 2. Run example VQA. Note we use the same input type as in [the test code](https://github.com/younesbelkada/transformers/blob/dc9dba7824a949b2a1f89e1f4537da9c8e25dd10/tests/models/instructblip/test_modeling_instructblip.py#L533). ``` url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") prompt = "What is unusual about this image?" # Note: Here we no longer specify `torch.bfloat16`, but we use `torch.float16` as shown in the test code for Salesforce/instructlblup-vicuna-7b inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, torch.float16) outputs = model.generate( **inputs, do_sample=False, num_beams=5, max_length=256, min_length=1, top_p=0.9, repetition_penalty=1.5, length_penalty=1.0, temperature=1, ) generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip() print(generated_text) ``` 3. Observe error ``` RuntimeError Traceback (most recent call last) Cell In[4], line 14 11 if torch.is_floating_point(v): 12 inputs[k] = v.to(torch.float16) ---> 14 outputs = model.generate( 15 **inputs, 16 do_sample=False, 17 num_beams=5, 18 max_length=256, 19 min_length=1, 20 top_p=0.9, 21 repetition_penalty=1.5, 22 length_penalty=1.0, 23 temperature=1, 24 ) 25 generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip() 26 print(generated_text) File /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs) 24 @functools.wraps(func) 25 def decorate_context(*args, **kwargs): 26 with self.clone(): ---> 27 return func(*args, **kwargs) File /usr/lib/python3/dist-packages/transformers/models/instructblip/modeling_instructblip.py:1522, in InstructBlipForConditionalGeneration.generate(self, pixel_values, qformer_input_ids, qformer_attention_mask, input_ids, attention_mask, **generate_kwargs) 1520 qformer_attention_mask = torch.ones_like(qformer_input_ids) 1521 qformer_attention_mask = torch.cat([query_attention_mask, qformer_attention_mask], dim=1) -> 1522 query_outputs = self.qformer( 1523 input_ids=qformer_input_ids, 1524 attention_mask=qformer_attention_mask, 1525 query_embeds=query_tokens, 1526 encoder_hidden_states=image_embeds, 1527 encoder_attention_mask=image_attention_mask, 1528 return_dict=True, 1529 ) 1530 query_output = query_outputs.last_hidden_state[:, : query_tokens.size(1), :] 1532 language_model_inputs = self.language_projection(query_output) File /usr/lib/python3/dist-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File /usr/lib/python3/dist-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 163 output = old_forward(*args, **kwargs) 164 else: --> 165 output = old_forward(*args, **kwargs) 166 return module._hf_hook.post_forward(module, output) File /usr/lib/python3/dist-packages/transformers/models/instructblip/modeling_instructblip.py:1169, in InstructBlipQFormerModel.forward(self, input_ids, attention_mask, position_ids, query_embeds, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 1163 past_key_values_length = ( 1164 past_key_values[0][0].shape[2] - self.config.query_length if past_key_values is not None else 0 1165 ) 1167 query_length = query_embeds.shape[1] if query_embeds is not None else 0 -> 1169 embedding_output = self.embeddings( 1170 input_ids=input_ids, 1171 position_ids=position_ids, 1172 query_embeds=query_embeds, 1173 past_key_values_length=past_key_values_length, 1174 ) 1176 input_shape = embedding_output.size()[:-1] 1177 batch_size, seq_length = input_shape File /usr/lib/python3/dist-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File /usr/lib/python3/dist-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 163 output = old_forward(*args, **kwargs) 164 else: --> 165 output = old_forward(*args, **kwargs) 166 return module._hf_hook.post_forward(module, output) File /usr/lib/python3/dist-packages/transformers/models/instructblip/modeling_instructblip.py:1041, in InstructBlipQFormerEmbeddings.forward(self, input_ids, position_ids, query_embeds, past_key_values_length) 1038 else: 1039 embeddings = query_embeds -> 1041 embeddings = self.layernorm(embeddings) 1042 embeddings = self.dropout(embeddings) 1043 return embeddings File /usr/lib/python3/dist-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File /usr/lib/python3/dist-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 163 output = old_forward(*args, **kwargs) 164 else: --> 165 output = old_forward(*args, **kwargs) 166 return module._hf_hook.post_forward(module, output) File /usr/lib/python3/dist-packages/torch/nn/modules/normalization.py:190, in LayerNorm.forward(self, input) 189 def forward(self, input: Tensor) -> Tensor: --> 190 return F.layer_norm( 191 input, self.normalized_shape, self.weight, self.bias, self.eps) File /usr/lib/python3/dist-packages/torch/nn/functional.py:2515, in layer_norm(input, normalized_shape, weight, bias, eps) 2511 if has_torch_function_variadic(input, weight, bias): 2512 return handle_torch_function( 2513 layer_norm, (input, weight, bias), input, normalized_shape, weight=weight, bias=bias, eps=eps 2514 ) -> 2515 return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled) RuntimeError: expected scalar type Float but found Half ``` I am unable to get `load_in_8bit` or `load_in_4bit` to work, both return these errors. I have also tried changing the dtype casting when putting the input processing to the GPU, but observe different errors. ### Expected behavior Expect quantization to work, as it does when using `Salesforce/instructblip-vicuna-7b` model. I am able to use quantized `google/flan-t5-xl` text generation model with the same setup, and have run `pip uninstall apex` as described in https://github.com/huggingface/transformers/issues/21391
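A minimal workaround sketch for this class of `Float`/`Half` mismatch, under the assumption that the failing `LayerNorm` weights were cast to half precision while their input stayed in float32. This is not taken from the issue thread, and the concrete checkpoint behind `MODEL_NAME` is not shown in the report, so the value below is purely illustrative:

```python
# Hedged sketch: upcast every LayerNorm so torch.layer_norm sees one consistent dtype.
import torch
from transformers import InstructBlipForConditionalGeneration, InstructBlipProcessor

MODEL_NAME = "Salesforce/instructblip-flan-t5-xl"  # illustrative placeholder, not the reporter's checkpoint

model = InstructBlipForConditionalGeneration.from_pretrained(
    MODEL_NAME, device_map="auto", load_in_8bit=True
)
processor = InstructBlipProcessor.from_pretrained(MODEL_NAME)

for module in model.modules():
    if isinstance(module, torch.nn.LayerNorm):
        module.to(torch.float32)  # weights and inputs now share float32
```

Whether this resolves the error for this particular checkpoint is untested; it only illustrates the usual way such dtype mismatches are worked around.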
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24884/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24884/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24883
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24883/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24883/comments
https://api.github.com/repos/huggingface/transformers/issues/24883/events
https://github.com/huggingface/transformers/pull/24883
1,809,932,404
PR_kwDOCUB6oc5VyUIx
24,883
๐ŸŒ[i18n-KO] Translated performance.md to Korean
{ "login": "augustinLib", "id": 74291999, "node_id": "MDQ6VXNlcjc0MjkxOTk5", "avatar_url": "https://avatars.githubusercontent.com/u/74291999?v=4", "gravatar_id": "", "url": "https://api.github.com/users/augustinLib", "html_url": "https://github.com/augustinLib", "followers_url": "https://api.github.com/users/augustinLib/followers", "following_url": "https://api.github.com/users/augustinLib/following{/other_user}", "gists_url": "https://api.github.com/users/augustinLib/gists{/gist_id}", "starred_url": "https://api.github.com/users/augustinLib/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/augustinLib/subscriptions", "organizations_url": "https://api.github.com/users/augustinLib/orgs", "repos_url": "https://api.github.com/users/augustinLib/repos", "events_url": "https://api.github.com/users/augustinLib/events{/privacy}", "received_events_url": "https://api.github.com/users/augustinLib/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "๋ฆฌ๋ทฐ ์‚ฌํ•ญ ์—†์Šต๋‹ˆ๋‹ค!" ]
1,689
1,690
1,690
CONTRIBUTOR
null
<!-- PR์˜ ์ œ๋ชฉ์€ "๐ŸŒ [i18n-KO] Translated `<your_file>.md` to Korean" ์œผ๋กœ ๋ถ€ํƒ๋“œ๋ฆฝ๋‹ˆ๋‹ค! --> # What does this PR do? Translated the `performance.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [x] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [x] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review? (Initial) <!-- 1. ์œ„ ์ฒดํฌ๊ฐ€ ๋ชจ๋‘ ์™„๋ฃŒ๋œ ๋’ค์—, ์ด ์•„๋ž˜์— ๋ฆฌ๋ทฐ๋ฅผ ์š”์ฒญํ•  ํŒ€์›๋“ค์„ ๋ฉ˜์…˜ํ•ด์ฃผ์„ธ์š”! --> May you please review this PR? @0525hhgus, @Sunmin0520, @54data, @seank021, @kihoon71 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. ํŒ€์›๋“ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ€ ๋๋‚œ ํ›„์—๋งŒ ํ—ˆ๊น…ํŽ˜์ด์Šค ์ง์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24883/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24883/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24883", "html_url": "https://github.com/huggingface/transformers/pull/24883", "diff_url": "https://github.com/huggingface/transformers/pull/24883.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24883.patch", "merged_at": 1690205014000 }
https://api.github.com/repos/huggingface/transformers/issues/24882
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24882/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24882/comments
https://api.github.com/repos/huggingface/transformers/issues/24882/events
https://github.com/huggingface/transformers/pull/24882
1,809,857,825
PR_kwDOCUB6oc5VyDkU
24,882
Enable `ZeroShotAudioClassificationPipelineTests::test_small_model_pt`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? This test was failing due to a dev version of `datasets` (created in another branch) being used in `main`. It's now resolved. Thank you @lhoestq for the investigation.
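As a side note, here is a small sketch (not part of the PR) of how to confirm which `datasets` build is installed before debugging such a test failure; a `.dev0` suffix marks a development install rather than a release:

```python
# Print the installed `datasets` version and flag development builds.
import datasets

version = datasets.__version__
print(version)
if "dev" in version:
    print("Development build detected; tests pinned to a release may behave differently.")
```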
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24882/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24882/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24882", "html_url": "https://github.com/huggingface/transformers/pull/24882", "diff_url": "https://github.com/huggingface/transformers/pull/24882.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24882.patch", "merged_at": 1689685733000 }
https://api.github.com/repos/huggingface/transformers/issues/24881
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24881/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24881/comments
https://api.github.com/repos/huggingface/transformers/issues/24881/events
https://github.com/huggingface/transformers/pull/24881
1,809,787,939
PR_kwDOCUB6oc5Vx0TO
24,881
๐ŸŒย [i18n-KO] Translatedย `transformers_agents.md` to Korean
{ "login": "sim-so", "id": 96299403, "node_id": "U_kgDOBb1piw", "avatar_url": "https://avatars.githubusercontent.com/u/96299403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sim-so", "html_url": "https://github.com/sim-so", "followers_url": "https://api.github.com/users/sim-so/followers", "following_url": "https://api.github.com/users/sim-so/following{/other_user}", "gists_url": "https://api.github.com/users/sim-so/gists{/gist_id}", "starred_url": "https://api.github.com/users/sim-so/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sim-so/subscriptions", "organizations_url": "https://api.github.com/users/sim-so/orgs", "repos_url": "https://api.github.com/users/sim-so/repos", "events_url": "https://api.github.com/users/sim-so/events{/privacy}", "received_events_url": "https://api.github.com/users/sim-so/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@eenzeenee @sronger๋‹˜ ๋ฆฌ๋ทฐ ๊ฐ์‚ฌํ•ฉ๋‹ˆ๋‹ค! \r\n์ œ์•ˆํ•ด์ฃผ์‹  ์ˆ˜์ •์‚ฌํ•ญ์„ ๋ชจ๋‘ ๋ฐ˜์˜ํ–ˆ์Šต๋‹ˆ๋‹ค โ˜บ๏ธ", "Could you review this PR? ๐Ÿ˜ƒ\r\n@sgugger, @ArthurZucker, @eunseojo" ]
1,689
1,690
1,690
CONTRIBUTOR
null
# What does this PR do? Translated the `transformers_agents.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [x] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [x] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review? (Initial) May you please review this PR? @sronger, @TaeYupNoh, @kj021, @HanNayeoniee, @eenzeenee, @sim-so ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) @sgugger, @ArthurZucker, @eunseojo May you please review this PR?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24881/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24881/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24881", "html_url": "https://github.com/huggingface/transformers/pull/24881", "diff_url": "https://github.com/huggingface/transformers/pull/24881.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24881.patch", "merged_at": 1690563997000 }
https://api.github.com/repos/huggingface/transformers/issues/24880
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24880/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24880/comments
https://api.github.com/repos/huggingface/transformers/issues/24880/events
https://github.com/huggingface/transformers/pull/24880
1,809,681,307
PR_kwDOCUB6oc5Vxcxt
24,880
Fix CircleCI cache
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? Fix an issue where the site-packages cache was loaded during the pip cache loading step. See the comment in the change.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24880/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24880/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24880", "html_url": "https://github.com/huggingface/transformers/pull/24880", "diff_url": "https://github.com/huggingface/transformers/pull/24880.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24880.patch", "merged_at": 1689680701000 }
https://api.github.com/repos/huggingface/transformers/issues/24879
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24879/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24879/comments
https://api.github.com/repos/huggingface/transformers/issues/24879/events
https://github.com/huggingface/transformers/pull/24879
1,809,673,635
PR_kwDOCUB6oc5VxbFN
24,879
add ascend npu accelerator support
{ "login": "statelesshz", "id": 28150734, "node_id": "MDQ6VXNlcjI4MTUwNzM0", "avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/statelesshz", "html_url": "https://github.com/statelesshz", "followers_url": "https://api.github.com/users/statelesshz/followers", "following_url": "https://api.github.com/users/statelesshz/following{/other_user}", "gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}", "starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions", "organizations_url": "https://api.github.com/users/statelesshz/orgs", "repos_url": "https://api.github.com/users/statelesshz/repos", "events_url": "https://api.github.com/users/statelesshz/events{/privacy}", "received_events_url": "https://api.github.com/users/statelesshz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,694
1,689
CONTRIBUTOR
null
### What does this PR do? Currently, Accelerate has supported ascend npu([see](https://github.com/huggingface/accelerate/pull/1676)). This PR enables users to leverage the ascend npu for training and inference of ๐Ÿค— Transformers models. For example, you can run the official glue text-classification task using ascend npu with below command: ```bash export TASK_NAME=sst2 time python -m torch.distributed.run --nproc_per_node 8 run_glue.py \ --model_name_or_path bert-base-cased \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3 \ --output_dir ./output ``` Below are the output logs: ```text WARNING:__main__: ***************************************** Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. ***************************************** 07/18/2023 22:10:00 - WARNING - __main__ - Process rank: 2, device: npu:2, n_gpu: 1distributed training: True, 16-bits training: False 07/18/2023 22:10:00 - WARNING - __main__ - Process rank: 1, device: npu:1, n_gpu: 1distributed training: True, 16-bits training: False 07/18/2023 22:10:00 - WARNING - __main__ - Process rank: 5, device: npu:5, n_gpu: 1distributed training: True, 16-bits training: False 07/18/2023 22:10:00 - WARNING - __main__ - Process rank: 3, device: npu:3, n_gpu: 1distributed training: True, 16-bits training: False 07/18/2023 22:10:00 - WARNING - datasets.builder - Found cached dataset glue (/root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 261.94it/s] 07/18/2023 22:10:00 - WARNING - __main__ - Process rank: 7, device: npu:7, n_gpu: 1distributed training: True, 16-bits training: False 07/18/2023 22:10:00 - WARNING - datasets.builder - Found cached dataset glue (/root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 0%| | 0/3 [00:00<?, ?it/s]07/18/2023 22:10:00 - WARNING - datasets.builder - Found cached dataset glue (/root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 224.74it/s] 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 229.06it/s] 07/18/2023 22:10:00 - WARNING - datasets.builder - Found cached dataset glue (/root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 07/18/2023 22:10:00 - WARNING - __main__ - Process rank: 4, device: npu:4, n_gpu: 1distributed training: True, 16-bits training: False 
100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 220.52it/s] 07/18/2023 22:10:00 - WARNING - datasets.builder - Found cached dataset glue (/root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 0%| | 0/3 [00:00<?, ?it/s]07/18/2023 22:10:00 - WARNING - __main__ - Process rank: 6, device: npu:6, n_gpu: 1distributed training: True, 16-bits training: False 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 218.07it/s] 07/18/2023 22:10:00 - WARNING - datasets.builder - Found cached dataset glue (/root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 144.02it/s] 07/18/2023 22:10:00 - WARNING - datasets.builder - Found cached dataset glue (/root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 220.95it/s] [WARNING|modeling_utils.py:3331] 2023-07-18 22:10:03,044 >> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. [WARNING|modeling_utils.py:3331] 2023-07-18 22:10:03,134 >> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. [WARNING|modeling_utils.py:3331] 2023-07-18 22:10:03,225 >> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. [WARNING|modeling_utils.py:3331] 2023-07-18 22:10:03,255 >> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
[WARNING|modeling_utils.py:3331] 2023-07-18 22:10:03,349 >> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. [WARNING|modeling_utils.py:3331] 2023-07-18 22:10:03,451 >> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. [WARNING|modeling_utils.py:3331] 2023-07-18 22:10:03,487 >> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 07/18/2023 22:10:04 - WARNING - __main__ - Process rank: 0, device: npu:0, n_gpu: 1distributed training: True, 16-bits training: False 07/18/2023 22:10:04 - INFO - __main__ - Training/evaluation parameters TrainingArguments( _n_gpu=1, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_pin_memory=True, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, do_eval=True, do_predict=False, do_train=True, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'fsdp_min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=False, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=<HUB_TOKEN>, ignore_data_skip=False, include_inputs_for_metrics=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=2e-05, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=./output/runs/Jul18_22-09-51_localhost.localdomain, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=500, logging_strategy=steps, lr_scheduler_type=linear, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, no_cuda=False, num_train_epochs=3.0, optim=adamw_hf, optim_args=None, output_dir=./output, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=32, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=<PUSH_TO_HUB_TOKEN>, ray_scope=last, remove_unused_columns=True, report_to=[], resume_from_checkpoint=None, run_name=./output, save_on_each_node=False, save_safetensors=False, save_steps=500, save_strategy=steps, save_total_limit=None, seed=42, sharded_ddp=[], skip_memory_metrics=True, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, 
torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=0, weight_decay=0.0, xpu_backend=None, ) 07/18/2023 22:10:04 - INFO - datasets.info - Loading Dataset Infos from /root/.cache/huggingface/modules/datasets_modules/datasets/glue/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad 07/18/2023 22:10:04 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists. 07/18/2023 22:10:04 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad 07/18/2023 22:10:04 - WARNING - datasets.builder - Found cached dataset glue (/root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 07/18/2023 22:10:04 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 279.65it/s] [INFO|configuration_utils.py:710] 2023-07-18 22:10:04,346 >> loading configuration file bert-base-cased/config.json [INFO|configuration_utils.py:768] 2023-07-18 22:10:04,352 >> Model config BertConfig { "_name_or_path": "bert-base-cased", "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "finetuning_task": "sst2", "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 0, "position_embedding_type": "absolute", "transformers_version": "4.31.0.dev0", "type_vocab_size": 2, "use_cache": true, "vocab_size": 28996 } [INFO|configuration_utils.py:710] 2023-07-18 22:10:04,353 >> loading configuration file bert-base-cased/config.json [INFO|configuration_utils.py:768] 2023-07-18 22:10:04,354 >> Model config BertConfig { "_name_or_path": "bert-base-cased", "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 0, "position_embedding_type": "absolute", "transformers_version": "4.31.0.dev0", "type_vocab_size": 2, "use_cache": true, "vocab_size": 28996 } [INFO|tokenization_utils_base.py:1841] 2023-07-18 22:10:04,355 >> loading file vocab.txt [INFO|tokenization_utils_base.py:1841] 2023-07-18 22:10:04,355 >> loading file tokenizer.json [INFO|tokenization_utils_base.py:1841] 2023-07-18 22:10:04,355 >> loading file added_tokens.json [INFO|tokenization_utils_base.py:1841] 2023-07-18 22:10:04,355 >> loading file special_tokens_map.json [INFO|tokenization_utils_base.py:1841] 2023-07-18 22:10:04,355 >> loading file tokenizer_config.json [INFO|configuration_utils.py:710] 2023-07-18 22:10:04,355 >> loading configuration file 
bert-base-cased/config.json [INFO|configuration_utils.py:768] 2023-07-18 22:10:04,356 >> Model config BertConfig { "_name_or_path": "bert-base-cased", "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 0, "position_embedding_type": "absolute", "transformers_version": "4.31.0.dev0", "type_vocab_size": 2, "use_cache": true, "vocab_size": 28996 } [INFO|modeling_utils.py:2600] 2023-07-18 22:10:04,436 >> loading weights file bert-base-cased/pytorch_model.bin [INFO|modeling_utils.py:3319] 2023-07-18 22:10:06,936 >> Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForSequenceClassification: ['cls.predictions.transform.LayerNorm.bias', 'cls.predictions.bias', 'cls.predictions.transform.dense.bias', 'cls.seq_relationship.bias', 'cls.seq_relationship.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.decoder.weight'] - This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). [WARNING|modeling_utils.py:3331] 2023-07-18 22:10:06,936 >> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
07/18/2023 22:10:06 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6dd95798535d1820.arrow 07/18/2023 22:10:06 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0f938cc5dd2d8410.arrow 07/18/2023 22:10:06 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d00cbf8279e5d543.arrow 07/18/2023 22:10:11 - INFO - __main__ - Sample 14592 of the training set: {'sentence': 'a great movie ', 'label': 1, 'idx': 14592, 'input_ids': [101, 170, 1632, 2523, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}. 07/18/2023 22:10:11 - INFO - __main__ - Sample 3278 of the training set: {'sentence': 'entertaining , if somewhat standardized , action ', 'label': 1, 'idx': 3278, 'input_ids': [101, 15021, 117, 1191, 4742, 18013, 117, 2168, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}. 
07/18/2023 22:10:11 - INFO - __main__ - Sample 36048 of the training set: {'sentence': 'even when there are lulls , the emotions seem authentic , ', 'label': 1, 'idx': 36048, 'input_ids': [101, 1256, 1165, 1175, 1132, 181, 11781, 1116, 117, 1103, 6288, 3166, 16047, 117, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}. 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6dd95798535d1820.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6dd95798535d1820.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6dd95798535d1820.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6dd95798535d1820.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6dd95798535d1820.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6dd95798535d1820.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6dd95798535d1820.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0f938cc5dd2d8410.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at 
/root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0f938cc5dd2d8410.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0f938cc5dd2d8410.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0f938cc5dd2d8410.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0f938cc5dd2d8410.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0f938cc5dd2d8410.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d00cbf8279e5d543.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0f938cc5dd2d8410.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d00cbf8279e5d543.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d00cbf8279e5d543.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d00cbf8279e5d543.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d00cbf8279e5d543.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d00cbf8279e5d543.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d00cbf8279e5d543.arrow [INFO|trainer.py:763] 2023-07-18 22:10:11,652 >> The following columns in the training set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: idx, sentence. If idx, sentence are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message. [W LegacyTypeDispatch.h:79] Warning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. 
For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (function operator()) [W LegacyTypeDispatch.h:79] Warning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (function operator()) [W LegacyTypeDispatch.h:79] Warning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (function operator()) [W LegacyTypeDispatch.h:79] Warning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (function operator()) [W LegacyTypeDispatch.h:79] Warning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (function operator()) [W LegacyTypeDispatch.h:79] Warning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (function operator()) [W LegacyTypeDispatch.h:79] Warning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. 
(function operator()) [W LegacyTypeDispatch.h:79] Warning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (function operator()) [INFO|trainer.py:1686] 2023-07-18 22:10:14,427 >> ***** Running training ***** [INFO|trainer.py:1687] 2023-07-18 22:10:14,427 >> Num examples = 67,349 [INFO|trainer.py:1688] 2023-07-18 22:10:14,427 >> Num Epochs = 3 [INFO|trainer.py:1689] 2023-07-18 22:10:14,427 >> Instantaneous batch size per device = 32 [INFO|trainer.py:1692] 2023-07-18 22:10:14,427 >> Total train batch size (w. parallel, distributed & accumulation) = 256 [INFO|trainer.py:1693] 2023-07-18 22:10:14,427 >> Gradient Accumulation steps = 1 [INFO|trainer.py:1694] 2023-07-18 22:10:14,427 >> Total optimization steps = 792 [INFO|trainer.py:1695] 2023-07-18 22:10:14,429 >> Number of trainable parameters = 108,311,810 0%| | 0/792 [00:00<?, ?it/s][W reducer.cpp:1278] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) [W reducer.cpp:1278] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) [W reducer.cpp:1278] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) [W reducer.cpp:1278] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) [W reducer.cpp:1278] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. 
This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) [W reducer.cpp:1278] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) [W reducer.cpp:1278] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) [W reducer.cpp:1278] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) 0%| | 1/792 [00:24<5:20:41, 24.33s/it] 63%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–Ž | 500/792 [02:39<00:57, 5.10it/s]{'loss': 0.2132, 'learning_rate': 7.373737373737374e-06, 'epoch': 1.89} 63%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–Ž | 500/792 [02:43<00:57, 5.10it/s][INFO|trainer.py:2807] 2023-07-18 22:13:00,287 >> Saving model checkpoint to ./output/checkpoint-500 [INFO|configuration_utils.py:458] 2023-07-18 22:13:00,289 >> Configuration saved in ./output/checkpoint-500/config.json [INFO|modeling_utils.py:1851] 2023-07-18 22:13:01,488 >> Model weights saved in ./output/checkpoint-500/pytorch_model.bin [INFO|tokenization_utils_base.py:2214] 2023-07-18 22:13:01,489 >> tokenizer config file saved in ./output/checkpoint-500/tokenizer_config.json [INFO|tokenization_utils_base.py:2221] 2023-07-18 22:13:01,489 >> Special tokens file saved in ./output/checkpoint-500/special_tokens_map.json 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 792/792 [03:44<00:00, 5.38it/s][INFO|trainer.py:1934] 2023-07-18 22:13:58,740 >> Training completed. 
Do not forget to share your model on huggingface.co/models =) {'train_runtime': 224.3121, 'train_samples_per_second': 900.741, 'train_steps_per_second': 3.531, 'train_loss': 0.17379718356662327, 'epoch': 3.0} 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 792/792 [03:44<00:00, 3.53it/s] [INFO|trainer.py:2807] 2023-07-18 22:13:58,745 >> Saving model checkpoint to ./output [INFO|configuration_utils.py:458] 2023-07-18 22:13:58,747 >> Configuration saved in ./output/config.json [INFO|modeling_utils.py:1851] 2023-07-18 22:13:59,855 >> Model weights saved in ./output/pytorch_model.bin [INFO|tokenization_utils_base.py:2214] 2023-07-18 22:13:59,857 >> tokenizer config file saved in ./output/tokenizer_config.json [INFO|tokenization_utils_base.py:2221] 2023-07-18 22:13:59,857 >> Special tokens file saved in ./output/special_tokens_map.json ***** train metrics ***** epoch = 3.0 train_loss = 0.1738 train_runtime = 0:03:44.31 train_samples = 67349 train_samples_per_second = 900.741 train_steps_per_second = 3.531 07/18/2023 22:13:59 - INFO - __main__ - *** Evaluate *** [INFO|trainer.py:763] 2023-07-18 22:13:59,922 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: idx, sentence. If idx, sentence are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message. [INFO|trainer.py:3081] 2023-07-18 22:13:59,926 >> ***** Running Evaluation ***** [INFO|trainer.py:3083] 2023-07-18 22:13:59,926 >> Num examples = 872 [INFO|trainer.py:3086] 2023-07-18 22:13:59,926 >> Batch size = 8 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 14/14 [00:07<00:00, 1.85it/s] ***** eval metrics ***** epoch = 3.0 eval_accuracy = 0.9186 eval_loss = 0.258 eval_runtime = 0:00:09.61 eval_samples = 872 eval_samples_per_second = 90.662 eval_steps_per_second = 1.456 real 4m38.911s user 39m59.583s sys 4m9.578s ``` cc @sgugger
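As a complement to the launch command above, here is a short sanity-check sketch (not from the PR itself; it assumes the `torch_npu` extension that Ascend devices require is installed and importable):

```python
# Check that PyTorch can see the Ascend NPUs before launching distributed training.
import torch
import torch_npu  # registers the "npu" device type with PyTorch

print(torch.npu.is_available())   # True when at least one Ascend NPU is visible
print(torch.npu.device_count())   # should match --nproc_per_node, e.g. 8 above
x = torch.ones(2, 2).to("npu:0")  # move a tensor onto the first NPU
print(x.device)
```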
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24879/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24879/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24879", "html_url": "https://github.com/huggingface/transformers/pull/24879", "diff_url": "https://github.com/huggingface/transformers/pull/24879.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24879.patch", "merged_at": 1689682833000 }
https://api.github.com/repos/huggingface/transformers/issues/24878
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24878/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24878/comments
https://api.github.com/repos/huggingface/transformers/issues/24878/events
https://github.com/huggingface/transformers/pull/24878
1,809,519,511
PR_kwDOCUB6oc5Vw5Mz
24,878
[`Docs`] Clarify 4bit docs
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? As discussed internally with @lewtun, this PR refactors the 4bit docs a bit by adding more clarifications about best practices and giving users relevant pointers to advanced usage. It also fixes the requirements instructions, since 4bit support is now part of the latest release. cc @sgugger
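For context, a minimal 4-bit loading sketch along the lines the revised docs describe; the checkpoint name is just an illustrative example, and the quantization settings shown are one common configuration rather than the only valid one:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# NF4 quantization with bf16 compute and nested (double) quantization,
# a memory-efficient setup for inference.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # example checkpoint, substitute your own
    quantization_config=quant_config,
    device_map="auto",
)
```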
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24878/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24878/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24878", "html_url": "https://github.com/huggingface/transformers/pull/24878", "diff_url": "https://github.com/huggingface/transformers/pull/24878.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24878.patch", "merged_at": 1689680348000 }
https://api.github.com/repos/huggingface/transformers/issues/24877
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24877/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24877/comments
https://api.github.com/repos/huggingface/transformers/issues/24877/events
https://github.com/huggingface/transformers/pull/24877
1,809,223,551
PR_kwDOCUB6oc5Vv4Qk
24,877
check if eval dataset is dict
{ "login": "hwijeen", "id": 29157715, "node_id": "MDQ6VXNlcjI5MTU3NzE1", "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hwijeen", "html_url": "https://github.com/hwijeen", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "repos_url": "https://api.github.com/users/hwijeen/repos", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? Simply checks if `eval_dataset` is a dict and, if it is, runs a sequential evaluation on each evaluation dataset. Fixes #24832 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
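A minimal sketch of the resulting usage; `model`, `train_ds`, `sst2_eval`, and `mnli_eval` are placeholder names assumed to be defined elsewhere (e.g. tokenized datasets), and only the dict-valued `eval_dataset` is the point:

```python
from transformers import Trainer, TrainingArguments

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", evaluation_strategy="epoch"),
    train_dataset=train_ds,
    # Passing a dict triggers a sequential evaluation over each dataset.
    eval_dataset={"sst2": sst2_eval, "mnli": mnli_eval},
)
trainer.train()
# Each dataset is evaluated in turn, and its metrics are prefixed with the
# dict key, e.g. eval_sst2_loss and eval_mnli_loss.
```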
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24877/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24877/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24877", "html_url": "https://github.com/huggingface/transformers/pull/24877", "diff_url": "https://github.com/huggingface/transformers/pull/24877.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24877.patch", "merged_at": 1689701622000 }
https://api.github.com/repos/huggingface/transformers/issues/24876
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24876/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24876/comments
https://api.github.com/repos/huggingface/transformers/issues/24876/events
https://github.com/huggingface/transformers/issues/24876
1,809,105,070
I_kwDOCUB6oc5r1MCu
24,876
Model saving have dimension issue while using deepspeed stage 3 with multi node for larger models
{ "login": "dittops", "id": 12937285, "node_id": "MDQ6VXNlcjEyOTM3Mjg1", "avatar_url": "https://avatars.githubusercontent.com/u/12937285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dittops", "html_url": "https://github.com/dittops", "followers_url": "https://api.github.com/users/dittops/followers", "following_url": "https://api.github.com/users/dittops/following{/other_user}", "gists_url": "https://api.github.com/users/dittops/gists{/gist_id}", "starred_url": "https://api.github.com/users/dittops/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dittops/subscriptions", "organizations_url": "https://api.github.com/users/dittops/orgs", "repos_url": "https://api.github.com/users/dittops/repos", "events_url": "https://api.github.com/users/dittops/events{/privacy}", "received_events_url": "https://api.github.com/users/dittops/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@dittops \r\nTo be fair, it was said that it was resolved, but we have no means of validating that, since the actual code was not shared. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,693
1,693
NONE
null
### System Info Python 3.8.10 transformers 4.30.2 accelerate 0.20.3 deepspeed 0.9.5 ### Who can help? @pacman100 ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I'm fine-tuning LLaMA 30B using the command below. `accelerate launch src/train_sft.py --model_name_or_path huggyllama/llama-30b --do_train --dataset dummy_identity --finetuning_type full --output_dir output/30B-sft-identity-v1 --overwrite_cache --per_device_train_batch_size 4 --gradient_accumulation_steps 4 --lr_scheduler_type cosine --logging_steps 10 --save_steps 200 --learning_rate 5e-5 --num_train_epochs 3 --plot_loss --fp16 --deepspeed ds_config.json --report_to wandb` I'm running this on 2 nodes with 4 A100 80GB GPUs each, using DeepSpeed stage 3. When I try to load the saved model, it gives the error below. This issue does not occur with LLaMA 7B, so I assume it has something to do with the stage 3 optimization and the weight gathering during saving. <img width="1156" alt="image" src="https://github.com/huggingface/transformers/assets/12937285/d4f1b234-b67b-457a-84b3-2f64c6101418"> This is my deepspeed config ``` { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } ``` A similar [issue](https://github.com/hiyouga/LLaMA-Efficient-Tuning/issues/70#issuecomment-1626876405) was reported on the open-source repo I'm using; they were able to resolve it by writing a script with Accelerate instead of using the HF Trainer. ### Expected behavior The multi-node stage 3 trained model should be saved without any errors.
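As a possible workaround (an untested sketch; the checkpoint path is hypothetical), DeepSpeed ships a consolidation helper that rebuilds a full fp32 state dict from the sharded ZeRO-3 checkpoint, instead of relying on the saved pytorch_model.bin:

```python
from transformers import AutoModelForCausalLM
from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint

model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-30b")
# Overwrites the model's weights with the consolidated fp32 weights gathered
# from the ZeRO-3 shards under <checkpoint_dir>/global_step*/.
model = load_state_dict_from_zero_checkpoint(
    model, "output/30B-sft-identity-v1/checkpoint-200"  # hypothetical path
)
```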
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24876/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24876/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24875
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24875/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24875/comments
https://api.github.com/repos/huggingface/transformers/issues/24875/events
https://github.com/huggingface/transformers/pull/24875
1,808,989,886
PR_kwDOCUB6oc5VvFnl
24,875
Remove jnp.DeviceArray since it is deprecated.
{ "login": "mariecwhite", "id": 5143063, "node_id": "MDQ6VXNlcjUxNDMwNjM=", "avatar_url": "https://avatars.githubusercontent.com/u/5143063?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariecwhite", "html_url": "https://github.com/mariecwhite", "followers_url": "https://api.github.com/users/mariecwhite/followers", "following_url": "https://api.github.com/users/mariecwhite/following{/other_user}", "gists_url": "https://api.github.com/users/mariecwhite/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariecwhite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariecwhite/subscriptions", "organizations_url": "https://api.github.com/users/mariecwhite/orgs", "repos_url": "https://api.github.com/users/mariecwhite/repos", "events_url": "https://api.github.com/users/mariecwhite/events{/privacy}", "received_events_url": "https://api.github.com/users/mariecwhite/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sanchit-gandhi ", "@sanchit-gandhi This pull request should be prioritised since the affected Flax models are currently not usable at all.", "@mariecwhite Thanks for opening this PR! Running `make style` and pushing the changes will resolve the code quality CI checks", "Hey @mariecwhite - it seems there is an issue with your CircleCI permissions, meaning the tests won't run!\r\nCould you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)? Let me know if you encounter any issues!", "_The documentation is not available anymore as the PR was closed or merged._", "> Hey @mariecwhite - it seems there is an issue with your CircleCI permissions, meaning the tests won't run! Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)? Let me know if you encounter any issues!\r\n\r\nI just updated my CircleCI permissions and rebased.", "CircleCI is still failing for me. Since this is a priority, I'm happy to let https://github.com/huggingface/transformers/pull/25275 get merged instead of this.", "Forced circleCI to run by also pushing the same branch on the main fork of Transformers. Should solve the tests issues (at least if there are no further commits needed ๐Ÿ˜… )" ]
1,689
1,691
1,691
CONTRIBUTOR
null
The latest version of JAX removes the deprecated jax.numpy.DeviceArray. When using this version with Transformers, we get this error when instantiating FlaxBertModel: `module 'jax.numpy' has no attribute 'DeviceArray'`
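For reference, a minimal sketch of the replacement pattern on recent JAX releases, where jax.Array is the unified array type:

```python
import jax
import jax.numpy as jnp

x = jnp.ones((2, 2))

# Old check, which now raises AttributeError on recent JAX:
#     isinstance(x, jnp.DeviceArray)

# Replacement that works on current releases:
assert isinstance(x, jax.Array)
```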
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24875/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24875", "html_url": "https://github.com/huggingface/transformers/pull/24875", "diff_url": "https://github.com/huggingface/transformers/pull/24875.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24875.patch", "merged_at": 1691170618000 }
https://api.github.com/repos/huggingface/transformers/issues/24874
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24874/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24874/comments
https://api.github.com/repos/huggingface/transformers/issues/24874/events
https://github.com/huggingface/transformers/issues/24874
1,808,983,458
I_kwDOCUB6oc5r0uWi
24,874
NotImplementedError: offload_to_cpu=True and NO_SHARD is not supported yet
{ "login": "linkailuo1986", "id": 95203644, "node_id": "U_kgDOBayxPA", "avatar_url": "https://avatars.githubusercontent.com/u/95203644?v=4", "gravatar_id": "", "url": "https://api.github.com/users/linkailuo1986", "html_url": "https://github.com/linkailuo1986", "followers_url": "https://api.github.com/users/linkailuo1986/followers", "following_url": "https://api.github.com/users/linkailuo1986/following{/other_user}", "gists_url": "https://api.github.com/users/linkailuo1986/gists{/gist_id}", "starred_url": "https://api.github.com/users/linkailuo1986/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/linkailuo1986/subscriptions", "organizations_url": "https://api.github.com/users/linkailuo1986/orgs", "repos_url": "https://api.github.com/users/linkailuo1986/repos", "events_url": "https://api.github.com/users/linkailuo1986/events{/privacy}", "received_events_url": "https://api.github.com/users/linkailuo1986/received_events", "type": "User", "site_admin": false }
[ { "id": 5616426447, "node_id": "LA_kwDOCUB6oc8AAAABTsPdzw", "url": "https://api.github.com/repos/huggingface/transformers/labels/solved", "name": "solved", "color": "B1D6DC", "default": false, "description": "" } ]
closed
false
null
[]
[ "Can you please give us a code reproducer of the issue? cc @pacman100 ", "> Can you please give us a code reproducer of the issue? cc @pacman100\r\n\r\nThanks. Sure, here is the code:\r\n\r\n`torchrun train.py \\\r\n --model_name_or_path openlm-research/open_llama_3b \\\r\n --data_path /path/to/data \\\r\n --bf16 True \\\r\n --output_dir /path/to/output \\\r\n --num_train_epochs 3 \\\r\n --per_device_train_batch_size 2 \\\r\n --per_device_eval_batch_size 2 \\\r\n --gradient_accumulation_steps 4 \\\r\n --evaluation_strategy \"no\" \\\r\n --eval_steps 1500 \\\r\n --save_strategy \"steps\" \\\r\n --save_steps 2 \\\r\n --save_total_limit 3 \\\r\n --learning_rate 2e-5 \\\r\n --weight_decay 0. \\\r\n --warmup_ratio 0.04 \\\r\n --lr_scheduler_type \"cosine\" \\\r\n --logging_steps 1 \\\r\n --fsdp \"shard_grad_op auto_wrap\" \\\r\n --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \\\r\n --tf32 True \\\r\n --model_max_length 2048 \\\r\n --gradient_checkpointing True \\\r\n --lazy_preprocess True\r\n`\r\n\r\n`train.py` can be found from [here](https://github.com/lm-sys/FastChat/blob/main/fastchat/train/train.py)\r\nYou can use dummy data from [here](https://github.com/lm-sys/FastChat/blob/main/data/dummy_conversation.json)", "Hello, thank you @linkailuo1986, the above PR should fix it. But it doesn't make sense to use FSDP on a single GPU", "Thanks @pacman100 for the quick fix. I used FSDP because it seemd to reduce VRAM for a larger batch size, which otherwise got an OOM error without using it." ]
1,689
1,689
1,689
NONE
null
### System Info I was using FSDP with settings "full_shard auto_wrap" on an A100 GPU. The training went well but was interrupted when saving the checkpoints. The error stated `NotImplementedError: offload_to_cpu=True and NO_SHARD is not supported yet`. I understand that since I am using a single GPU, FSDP defaults to NO_SHARD. However, I don't understand why offload_to_cpu was set to True. Is there anywhere I can reset it to False? ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Following https://github.com/lm-sys/FastChat to fine-tune an LLM. ### Expected behavior Checkpoint saving should complete without raising the error stated above.
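For reference, a rough sketch of the PyTorch API combination involved; `model` is assumed to be an FSDP-wrapped module. On a single GPU (NO_SHARD) the CPU offload can simply be disabled, since there is nothing to gather across ranks:

```python
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import StateDictType, FullStateDictConfig

# offload_to_cpu=True together with NO_SHARD is the unsupported combination;
# setting it to False avoids the NotImplementedError on this PyTorch version.
cfg = FullStateDictConfig(offload_to_cpu=False, rank0_only=True)
with FSDP.state_dict_type(model, StateDictType.FULL_STATE_DICT, cfg):
    state_dict = model.state_dict()
```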
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24874/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24873
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24873/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24873/comments
https://api.github.com/repos/huggingface/transformers/issues/24873/events
https://github.com/huggingface/transformers/issues/24873
1,808,918,347
I_kwDOCUB6oc5r0edL
24,873
ZeroShotClassificationPipeline has large memory spikes when using a lot of candidate_labels
{ "login": "rsmith49", "id": 17658617, "node_id": "MDQ6VXNlcjE3NjU4NjE3", "avatar_url": "https://avatars.githubusercontent.com/u/17658617?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rsmith49", "html_url": "https://github.com/rsmith49", "followers_url": "https://api.github.com/users/rsmith49/followers", "following_url": "https://api.github.com/users/rsmith49/following{/other_user}", "gists_url": "https://api.github.com/users/rsmith49/gists{/gist_id}", "starred_url": "https://api.github.com/users/rsmith49/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rsmith49/subscriptions", "organizations_url": "https://api.github.com/users/rsmith49/orgs", "repos_url": "https://api.github.com/users/rsmith49/repos", "events_url": "https://api.github.com/users/rsmith49/events{/privacy}", "received_events_url": "https://api.github.com/users/rsmith49/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "Hi @rsmith49 \r\n\r\nThank you for opening this issue ๐Ÿค— . I will take a look!", "> could you confirm that the issue happens only when the results (of all 1500 inference calls) are saved? (I think it's yes?)\r\n\r\nI have been running in a jupyter notebook, which I think does save the results from calling the pipeline since it is the final statement in the cell - let me try in a regular python process and see if the memory spikes the same.\r\n\r\nI should note though that the \"1500 inference calls\" I mentioned are only over 20 documents - since there are 130 `candidate_labels`, the pipeline calls the model for inference 2600 times (130 * 20). So saving the results here will be 20 dicts with ranked scores for each `candidate_label`.\r\n\r\n> when you save the model inference results, do you also contain those returned past_key_values? (I guess not ..?)\r\n\r\nCorrect, the result from the pipeline does not contain the `past_key_values`. The \"storing in a single list\" code occurs [here](https://github.com/huggingface/transformers/blob/dd49404a897f84622d38254fe90cd07d8c1640b0/src/transformers/pipelines/base.py#L1103), and stepping through with `pdb` shows the iterator's internal function creating a reference to the `past_key_values` at each `__next__` call", "Hi, I am not able to reproduce with the following (slightly modified) script (see at the end), running in python directly\r\n\r\n```bash\r\n\r\niteration: 0\r\n\r\nRAM: 4318.6015625 MB\r\ntiming: 18.248116 sec.\r\n\r\n==============\r\n\r\niteration: 156\r\nRAM: 4319.5 MB\r\ntiming: 18.464201 sec\r\n```\r\nIt would be great if you can try to see if the issue happens with python script only.\r\n\r\nHowever, this is a sequence classification model, and the `past_key_values` is not used by this model.\r\nI will try to have a fix anyway. Thank you again for showing this issue to us!\r\n\r\n\r\n```python\r\nfrom transformers import pipeline\r\n\r\ntmp_repro_data = ['I purchased this to replace my 7 yr old video baby monitor that had been dropped too many times.'] * 20\r\n\r\nckpt = 'facebook/bart-large-mnli'\r\n# ckpt = 'facebook/bart-base'\r\n\r\np = pipeline(\r\n 'zero-shot-classification',\r\n model=ckpt,\r\n device=\"cuda\",\r\n batch_size=20,\r\n)\r\n\r\nimport pdb; pdb.set_trace()\r\n\r\ndef _revised_forward(self, inputs):\r\n candidate_label = inputs[\"candidate_label\"]\r\n sequence = inputs[\"sequence\"]\r\n model_inputs = {k: inputs[k] for k in self.tokenizer.model_input_names} #type: ignore\r\n outputs = self.model(**model_inputs, use_cache=False)\r\n\r\n model_outputs = {\r\n \"candidate_label\": candidate_label,\r\n \"sequence\": sequence,\r\n \"is_last\": inputs[\"is_last\"],\r\n **outputs,\r\n }\r\n return model_outputs\r\n\r\n# With this line it works as expected, without it memory spikes. The only difference between `revised_forward`\r\n# and the transformers repo is that we pass `use_cache=False` as an extra arg to inference with `self.model`\r\n\r\nimport psutil\r\nimport os\r\nprocess = psutil.Process(os.getpid())\r\n\r\n#p._forward = _revised_forward.__get__(p)\r\n\r\nimport datetime\r\n\r\nfor i in range(1000):\r\n s = datetime.datetime.now()\r\n o = p(\r\n tmp_repro_data,\r\n multi_label=True,\r\n candidate_labels=list(range(130)),\r\n )\r\n e = datetime.datetime.now()\r\n d = (e-s).total_seconds()\r\n mem = process.memory_info()[0] / float(2 ** 20)\r\n print(i)\r\n print(mem)\r\n print(d)\r\n print(\"=\" * 80)\r\n```", "Thanks for looking into this! 
\n\nWeirdly, I also did not see memory spikes when using a single text snippet copied 20 times, only when using 20 unique strings (I'm guessing something to do with caching somewhere in either python, torch, or transformers that makes garbage collection more effective). So if you could try using the example list I posted above that may do it.\n\nHaven't had a chance to run the script in a pure python process but will let you know when I do!", "That would be nice to know! (I am opening a PR soon anyway :-) )", "Ran the script as just `python tmp_script.py` and saw memory go as high as 7.9Gi before I killed the process, somewhere around 1187 samples (NOTE: same environment as above, transformers==4.27.4, python version 3.8.0). So it looks like it occurs not just when saving the result of `p(...)`, and is not just an artifact of notebooks ๐Ÿ‘ ", "> go as high as 7.9Gi\r\n\r\nYou use the different 20 text sentences in `tmp_repro_data`, right?\r\n\r\n(I am running with the same text repeated 20 times, with latest `main` of `transformers`.)", "> You use the different 20 text sentences in tmp_repro_data, right?\r\n\r\nYes, not sure why repeating the same text doesn't trigger it, but I get the same result as you when using repeated text" ]
1,689
1,689
1,689
NONE
null
### System Info ``` - `transformers` version: 4.27.4 - Platform: Linux-5.4.0-1103-aws-x86_64-with-glibc2.27 - Python version: 3.8.0 - Huggingface_hub version: 0.16.4 - PyTorch version (GPU?): 1.8.1+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: YES - Using distributed or parallel set-up in script?: NO ``` ### Who can help? @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction # Repro Steps Cell ``` from transformers import pipeline tmp_repro_data = ['What a wonderful creation. Art, in our house, comes in many colors Thanks to Crayola! We unfortunately never seem to have enough crayons, though!', 'I purchased this to replace my 7 yr old video baby monitor that had been dropped too many times. I love this monitor. It also works with my old Summer monitor camera.', 'This float is very comfortable (and we LOVE the cup holders) but the headrest is a little too high to be universally comfortable. I think letting some air out of it would solve the problem.', 'I have Marmite on toast every morning. Although I have been told by an Australian friend that Marmite is not up to the standards of Vegemite, I cannot tell the difference. An English friend tells me Marmite is better. Go figure.:) I love it and make certain to never run out of Marmite. This is one of those', "This was the only reason I could get anything done once we got home. My daughter always wanted to be held, but once you would lay her under here (especially the mirror), she was mesmerized. I highly recommend buying the extra set of toys also! Even though she's older now and doesn't play with it as much - she still loves to take all of the toys with her and play with them alone!!!", 'This is the best packaged coconut water that I have ever tasted. It reminds me of the fresh coconut water right from the tree that I used to have in Jamaica. I have tried other brands of coconut water, but none can compare to Vita Coco. That is the only brand that I will buy...other that buying the real green coconut.', "I specifically looked for this product online because I couldn't find it in the local drugstore any longer. This really does what it says it does - it gives you a matte finish which lasts pretty much all day - you will notice a difference when you use it and when you don't. If you tend to get oily skin during the day (T-zone or otherwise) this minimizes that significantly. I hope I can always find this product!", "We got this for my daughter's first birthday and she loves it. She can make the animals or just push the frog and dance to the songs. It's also easy to take with you and at home, she can carry the little farm house to another room if she wants. (It's been thrown all over and hasn't broken, which is another plus.) We may get tired of the same animal facts and songs but she never does.", 'I love Bare Escentuals bare minerals products. I treated myself to 4 of the products. I found the accompanying brushes high in quality. I wash my brushes periodically to prevent break outs. The brushes do well. I have had many compliments on my complection. Even though I am older I still get breakouts. 
The minerals have helped to decrease the flare- ups.**** I can well identify with the comments by some customers about dry skin and looking older.**** I absolutely must use a moisturizer with each application. I wish so much this moisturizing issue would be addressed with TV presentations. Otherwise, without liberal application of a moisturizer, my skin would look extremely dry and chalky no matter how beautiful the glow! Also lines are quite visible if moisure is insufficient. In spite of all of this, I have found minerals to be a great makeup.It is worth the money and time to continue my use of a moisturizer routine I have used for years. Ann HannaI also wish the lids were plastic for easy washing after use. I use an alcohol wipe to clean the inside of the lids periodically.', "My 10 month old son loves this toy! It is one of his favorites. Not only does he like to put the shapes (and any other toy that will fit) in the holes, but he also loves to play with the shapes on the floor, especially the ones that shake and make noise. He also likes this toy a lot because he can open and close the lid of the drum, repeatedly putting in and taking out shapes and toys. The Shape Rattle 'n Roll has definitely been worth the mere five dollars that it cost!", "I have been looking a long time for gum that is not made of materials bad for your health. I'm not worried about taste, but this tastes good and more importantly for me it is healthy and chews well. Some of the healthy chewing gums just fall apart. to me, healthy chewing gum means it doesn't have sugar or the horrible chemicals you find in the sugarless gums sold at grocery stores.", 'We adopted two cats from a rescue shelter, a male first and then a female a couple of days later. They got along okay in the beginning but became more and more jealous of each other. The male had to be the boss of everything...food, toys and attention. The female started getting back at him about two months later by wetting in his favorite hang-out spots on the floor and then on my husband\'s leather recliner. I tried Nature\'s Miracle on the carpet first but it didn\'t work. The smell was still there and she went back and wet on the same spot.After researching online and reading the reviews and tips from other customers, I ordered a gallon of Urine Off For Cats through Amazon, as well as a blacklight from Walmart and a big marinade infuser syringe from Linens N Things. The blacklight found spots we were unaware of, including under the recliner. I took masking tape and marked the area about 6" beyond each spot on the carpet and then marked spots about 4" apart within each circle. I poured about 3 cups of Urine Off into a 4 cup measuring cup to make it easier to draw the solution into the syringe. Then I injected each spot with a full syringe of Urine Off, marking each spot with an X in pen on the masking tape as I went along so I knew where I had already injected. Eventually the tip on the syringe was bending so I found that it was easier to use a big skewer to poke the hole first and then push the syringe into the carpet. When I finished injecting, I then filled a pump sprayer with the solution and saturated the top of the carpet. I covered the spots with plastic garbage bags for 2 days and then allowed them to air dry.For the leather recliner I had to pull the leather covers away from the back of the cushions and spray the leather both inside and out and around the zippers. 
I injected the cushions with the syringe like I did the carpet and put them in garbage bags. I also put a plastic tarp under the chair and sprayed everywhere the urine may have gone on top and underneath of the chair including any padding and all of the metal and springs. Then I covered it all with plastic bags for 2 days before letting it air dry. Check the cushions to be sure mold doesn\'t start growing. I removed them from the plastic early. I used a leather conditioner afterwards to restore the pliability to the leather. The metal underneath began to rust in some places, but it came off with WD 40 when we treated the hinges afterwards.I wish I could say I only had to do all of this work once, but I had to repeat it a second time before all of the smell was gone to my sensitive nose. To be fair, the directions say it may take two or more treatments for the smell to be eliminated. Also, I had used the Nature\'s Miracle on the two small spots I was aware of first which may have made it harder for the Urine Off to work. I\'m sure I didn\'t get down to the subfloor with the Nature\'s Miracle and I didn\'t cover it with plastic. In fact, I used a fan on it to dry it faster which I now know is the opposite of what you should do. But because the Urine Off directions said you had to saturate the carpet, the padding and the subfloor below the padding, and also to widen the area that you treat beyond the original spots, I had to buy a 2nd gallon. To repeat the process, I bought a 3rd gallon. But the end result is that we don\'t smell any urine odor. Only a slight lemony smell of the solution remains. I was able to save our $1800 recliner, but sadly, my husband insisted that our female cat go back to the rescue shelter. I\'m sure she will be much better off as an only cat who is queen of her castle just as our boy enjoys being king.', "We like trying new flours for our home made bread (mostly sourdough). Spelt works very well with sourdough starter. Gives the bread a subtle nut like flavor and fine texture. Plus, it doesn't affect the rise very much (we do add gluten to assist the sourdough rise). Of amaranth, teff, and spelt, we like spelt the best.", 'One of the many things I can do during the day is use smaller amounts of paper to try to reduce my carbon footprint. I know it would be better to not use papertowels at all, but this is surely a good alternative for those of us on the path to global warming enlightenment.', 'The product is wonderful - it leaves the laundry smelling very fresh. It is a pleasure to deal with the folks at [...]. They are very quick with their deliveries and responsive.', "I originally gave this product a 5 star review, but I was forced to edit the review after 3 months of use. I find the major problem with this item is that if you have a very messy diaper and can't tightly bundle it without a mess, the mess gets all over the module that dumps the diaper and the stink is OUTSIDE of the diaper pail.I used this pail with both cloth and disposable diapers (during different time periods). The first month it worked great for both, in my opinion, although it didn't hold near as many cloth diapers and they occasionally needed some help getting into the pail. However, once my baby grew into the next size of cloth diapers, it was IMPOSSIBLE to use the champ with them. 
Now, I understand this pail is not marketed for use with cloth diapers so I won't hold that against it, however just in case you are considering it for such a purpose as I did, DON'T.The last complaint I have with this pail is that after only 2 months of use the inner seal broke and fumes were all over the room. I did NOT call the company for a replacement as the other reviewer did, because I cannot use the pail with my cloth diapers.This pail has become garage sale fare.", "It's a nice replica toy for children. Good work with the full retractable blade, but not very shinny (the similar saber created before shines much more). a little big for kids, but fun to play with. my son loves it.", "This machine has plenty of power. I have only used it to make pizza dough and it worked extremely well. The dough came out great and I can't wait to use the shredding blades next.", 'This pasta has a wonderful natural flavor and is naturally high in fiber and nutrients. I eat it plain sometimes, and use it in place of white rice and other more processed grains.', 'I really recommend this product. The price on Amazon was a lot better than I could find in any store. The product arrived ahead of expected delivery time. It works really well, its quick to heat up and does a really good job of smoothing down my thick hair!'] p = pipeline( 'zero-shot-classification', model='facebook/bart-large-mnli', device=0, ) def _revised_forward(self, inputs): candidate_label = inputs["candidate_label"] sequence = inputs["sequence"] model_inputs = {k: inputs[k] for k in self.tokenizer.model_input_names} #type: ignore outputs = self.model(**model_inputs, use_cache=False) model_outputs = { "candidate_label": candidate_label, "sequence": sequence, "is_last": inputs["is_last"], **outputs, } return model_outputs # With this line it works as expected, without it memory spikes. The only difference between `revised_forward` # and the transformers repo is that we pass `use_cache=False` as an extra arg to inference with `self.model` #p._forward = _revised_forward.__get__(p) p( tmp_repro_data, multi_label=True, candidate_labels=list(range(130)), ) ``` ### Expected behavior Letting this script run as is causes memory (CPU memory, not GPU memory) to spike over 10Gi at around 1500 inference calls. This can break a lot of environments, especially anything involving running jobs on resource constrained machines. After some debugging, we traced this to the `past_key_values` object being returned by the Bart model, which was a tuple of some very large tensors. We suspect that these large tensors are causing garbage collection to not be able to catch up when storing all of these model inference requests in a single list. Passing `use_cache=False` to model inference (and therefore not returning the `past_key_values` object) fixes the memory spikes, making us think this was indeed the issue.
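A lighter-weight variant of the same workaround, assuming the config flag is respected by the model's forward pass (BART reads `use_cache` from its config when the argument is not passed explicitly), is to flip the flag once on the pipeline's model config instead of monkey-patching `_forward`:

```python
# `p` is the zero-shot pipeline built in the snippet above. With the flag
# off, the large past_key_values tensors are never returned at all.
p.model.config.use_cache = False
```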
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24873/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24873/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24872
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24872/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24872/comments
https://api.github.com/repos/huggingface/transformers/issues/24872/events
https://github.com/huggingface/transformers/issues/24872
1,808,895,688
I_kwDOCUB6oc5r0Y7I
24,872
`main_input_name` is None if `predict_with_generate` in keras_callbacks.py for encoder-decoder(Bert-Bert) TF models
{ "login": "saichandrapandraju", "id": 41769919, "node_id": "MDQ6VXNlcjQxNzY5OTE5", "avatar_url": "https://avatars.githubusercontent.com/u/41769919?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saichandrapandraju", "html_url": "https://github.com/saichandrapandraju", "followers_url": "https://api.github.com/users/saichandrapandraju/followers", "following_url": "https://api.github.com/users/saichandrapandraju/following{/other_user}", "gists_url": "https://api.github.com/users/saichandrapandraju/gists{/gist_id}", "starred_url": "https://api.github.com/users/saichandrapandraju/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saichandrapandraju/subscriptions", "organizations_url": "https://api.github.com/users/saichandrapandraju/orgs", "repos_url": "https://api.github.com/users/saichandrapandraju/repos", "events_url": "https://api.github.com/users/saichandrapandraju/events{/privacy}", "received_events_url": "https://api.github.com/users/saichandrapandraju/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }, { "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false } ]
[ "Thank you for reproting @saichandrapandraju ๐Ÿค— \r\n\r\ncc @Rocketknight1 (when he is back, or I can take a look after finishing some other tasks ) ", "Sure @ydshieh , once this is verified to be valid, I can create a PR ๐Ÿ™‚" ]
1,689
1,689
1,689
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.15.109+-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.7.0 (gpu) - Jax version: 0.4.13 - JaxLib version: 0.4.13 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @gante , @Rocketknight1 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` KeyError Traceback (most recent call last) [<ipython-input-49-089aabb58b9b>](https://localhost:8080/#) in <cell line: 1>() ----> 1 history = model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=num_epochs, callbacks=callbacks) 1 frames [/usr/local/lib/python3.10/dist-packages/transformers/keras_callbacks.py](https://localhost:8080/#) in on_epoch_end(self, epoch, logs) 217 if self.predict_with_generate: 218 if isinstance(batch, dict): --> 219 generation_inputs = batch[main_input_name] 220 attention_mask = batch.get("attention_mask", None) 221 else: KeyError: None ``` [Here's](https://colab.research.google.com/drive/1d75HqymedDSopRDXDBSNIBGT1q7zvCWz?usp=sharing) the colab link to reproduce the error. It happens because of this code in `keras_callbacks.py` (commented with >>>>> .... <<<<<< for better understanding) - ``` #### in tf_keras_callback (in func `on_epoch_end`, ~line 191) main_input_name = None if self.predict_with_generate: # This dense conditional recognizes the case where we have an encoder-decoder model, but # avoids getting tangled up when we just have a model with a layer called 'encoder' if hasattr(self.model, "encoder") and hasattr(self.model.encoder, "main_input_name"): # >>>>>>> If this condition is not satisfied (which is the case currently), `main_input_name` remains None <<<<<<<< if self.model.encoder.main_input_name != self.model.main_input_name: main_input_name = self.model.encoder.main_input_name else: main_input_name = getattr(self.model, "main_input_name", "input_ids") if self.use_xla_generation and self.generation_function is None: def generation_function(inputs, attention_mask): return self.model.generate(inputs, attention_mask=attention_mask, **self.generate_kwargs) self.generation_function = tf.function(generation_function, jit_compile=True) prediction_list = [] label_list = [] # The whole predict/generate loop is handled inside this method for batch in self.eval_dataset: if isinstance(batch, tuple): batch, labels = batch else: labels = None if self.predict_with_generate: if isinstance(batch, dict): generation_inputs = batch[main_input_name] # >>>>>>>>>>>> `main_input_name` remains None here (~line 219) <<<<<<<<<<<< attention_mask = batch.get("attention_mask", None) else: generation_inputs = batch attention_mask = None if self.use_xla_generation: predictions = self.generation_function(generation_inputs, attention_mask=attention_mask) else: predictions = self.model.generate( generation_inputs, attention_mask=attention_mask, **self.generate_kwargs ) ``` ### Expected behavior `main_input_name` should be `input_ids`, so the following function can be modified - ``` if hasattr(self.model, "encoder") and hasattr(self.model.encoder, "main_input_name"): if self.model.encoder.main_input_name != self.model.main_input_name: main_input_name
= self.model.encoder.main_input_name ``` to something like - ``` if hasattr(self.model, "encoder") and hasattr(self.model.encoder, "main_input_name"): main_input_name = self.model.encoder.main_input_name ```
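For completeness, a sketch of the callback usage that hits this code path; `compute_metrics`, `tf_train_set`, and `tf_validation_set` are assumed to be defined as in the linked notebook:

```python
from transformers.keras_callbacks import KerasMetricCallback

metric_callback = KerasMetricCallback(
    metric_fn=compute_metrics,
    eval_dataset=tf_validation_set,
    predict_with_generate=True,  # triggers the main_input_name lookup above
)
model.fit(
    tf_train_set,
    validation_data=tf_validation_set,
    epochs=1,
    callbacks=[metric_callback],
)
```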
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24872/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24872/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24871
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24871/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24871/comments
https://api.github.com/repos/huggingface/transformers/issues/24871/events
https://github.com/huggingface/transformers/issues/24871
1,808,842,241
I_kwDOCUB6oc5r0L4B
24,871
Trainer is always using IPEX, even when use_ipex=False
{ "login": "dmsuehir", "id": 13952606, "node_id": "MDQ6VXNlcjEzOTUyNjA2", "avatar_url": "https://avatars.githubusercontent.com/u/13952606?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dmsuehir", "html_url": "https://github.com/dmsuehir", "followers_url": "https://api.github.com/users/dmsuehir/followers", "following_url": "https://api.github.com/users/dmsuehir/following{/other_user}", "gists_url": "https://api.github.com/users/dmsuehir/gists{/gist_id}", "starred_url": "https://api.github.com/users/dmsuehir/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dmsuehir/subscriptions", "organizations_url": "https://api.github.com/users/dmsuehir/orgs", "repos_url": "https://api.github.com/users/dmsuehir/repos", "events_url": "https://api.github.com/users/dmsuehir/events{/privacy}", "received_events_url": "https://api.github.com/users/dmsuehir/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false } ]
[ "cc @muellerzr (right?)", "This is a problem that should be solved in Accelerate, I'll work on a PR today with this. Thanks for the flag!\r\n\r\nEdit: actually this can be solved in the training args, PR coming shortly", "@dmsuehir can you try running again with `pip install git+https://github.com/huggingface/transformers@muellerzr-ipex` and set `use_ipex` to `False`? (it's the default)", "@muellerzr Yes, the fix in your branch works. Thanks!", "@muellerzr By the way, I think `no_cuda` and `ACCELERATE_USE_CPU` may have the same issue, but I don't have a GPU on my machine to verify." ]
1,689
1,689
1,689
CONTRIBUTOR
null
### System Info - `transformers` version: 4.32.0.dev0 - Platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @sgugger ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduce the behavior: 1. The issue can be reproduced with the [text-classification example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) script (other scripts would have the same issue). I have `intel-extension-for-pytorch==2.0.100` installed in my environment and am running the following command to run_glue.py without `use_ipex` (so it should default to `False`): ``` export MODEL_NAME=distilbert-base-uncased export OUTPUT_DIR=/home/dmsuehir/glue_output export TASK_NAME=mrpc python run_glue.py \ --model_name_or_path $MODEL_NAME \ --task_name $TASK_NAME \ --do_train \ --max_seq_length 128 \ --per_device_train_batch_size 64 \ --learning_rate 2e-5 \ --num_train_epochs 1 \ --no_cuda \ --output_dir $OUTPUT_DIR \ --bf16 ``` The train metrics I see with this run are: ``` ***** train metrics ***** epoch = 1.0 train_loss = 0.6083 train_runtime = 0:00:37.35 train_samples = 3668 train_samples_per_second = 98.191 train_steps_per_second = 1.553 ``` Note that we are seeing `98.191` samples/second. 2. Next try running the same command, except adding on `--use_ipex`. Note that I am also deleting my output directory between runs. ``` python run_glue.py \ --model_name_or_path $MODEL_NAME \ --task_name $TASK_NAME \ --do_train \ --max_seq_length 128 \ --per_device_train_batch_size 64 \ --learning_rate 2e-5 \ --num_train_epochs 1 \ --no_cuda \ --output_dir $OUTPUT_DIR \ --bf16 \ --use_ipex ``` I see a similar training metric for `train_samples_per_second` as step 1: ``` ***** train metrics ***** epoch = 1.0 train_loss = 0.6083 train_runtime = 0:00:37.94 train_samples = 3668 train_samples_per_second = 96.654 train_steps_per_second = 1.528 ``` 3. Finally, I had debugged this issue to look into how IPEX is being used in the Trainer. I found that it can be called in two places: (1) it can get called from the Trainer [here](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1310) or (2) it can get called by accelerate [here](https://github.com/huggingface/accelerate/blob/main/src/accelerate/accelerator.py#L1748). The Trainer is properly respecting the `use_ipex` arg, however, it appears that accelerate is always using IPEX if it's installed. Digging deeper into this, I found that accelerate would only not use IPEX if [`ACCELERATE_USE_IPEX` gets set to False/0](https://github.com/huggingface/accelerate/blob/main/src/accelerate/state.py#L765). 
To confirm this, I manually set `ACCELERATE_USE_IPEX=0` and then ran the same script/args from step 1: ``` export ACCELERATE_USE_IPEX=0 python run_glue.py \ --model_name_or_path $MODEL_NAME \ --task_name $TASK_NAME \ --do_train \ --max_seq_length 128 \ --per_device_train_batch_size 64 \ --learning_rate 2e-5 \ --num_train_epochs 1 \ --no_cuda \ --output_dir $OUTPUT_DIR \ --bf16 ``` And now I see these training metrics, where we see a drop in `train_samples_per_second`, which indicates that IPEX has actually been turned off now that the env var was used: ``` ***** train metrics ***** epoch = 1.0 train_loss = 0.697 train_runtime = 0:01:07.74 train_samples = 3668 train_samples_per_second = 54.143 train_steps_per_second = 0.856 ``` ### Expected behavior When `use_ipex` is not given or set to `False`, IPEX optimize should not get called. If it's agreed that this is in fact a bug, I would be happy to work on a PR to fix it. I saw that other accelerate env vars are getting set from `training_args.py`.
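For illustration, here is a minimal sketch of the kind of fix described in the comments above, where the training-arguments layer propagates `use_ipex` into the environment variable that Accelerate reads. The helper name is hypothetical; only `ACCELERATE_USE_IPEX` comes from the report.

```python
import os

def propagate_ipex_flag(use_ipex: bool) -> None:
    """Hypothetical helper: mirror `use_ipex` into the env var Accelerate reads.

    Accelerate checks ACCELERATE_USE_IPEX when its state is initialized, so this
    must run before the Trainer constructs its Accelerator.
    """
    os.environ["ACCELERATE_USE_IPEX"] = "1" if use_ipex else "0"

propagate_ipex_flag(use_ipex=False)
print(os.environ["ACCELERATE_USE_IPEX"])  # "0": IPEX stays off even when installed
```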
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24871/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24871/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24870
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24870/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24870/comments
https://api.github.com/repos/huggingface/transformers/issues/24870/events
https://github.com/huggingface/transformers/issues/24870
1,808,712,700
I_kwDOCUB6oc5rzsP8
24,870
Amazon Bedrock model as HfAgent
{ "login": "austinmw", "id": 12224358, "node_id": "MDQ6VXNlcjEyMjI0MzU4", "avatar_url": "https://avatars.githubusercontent.com/u/12224358?v=4", "gravatar_id": "", "url": "https://api.github.com/users/austinmw", "html_url": "https://github.com/austinmw", "followers_url": "https://api.github.com/users/austinmw/followers", "following_url": "https://api.github.com/users/austinmw/following{/other_user}", "gists_url": "https://api.github.com/users/austinmw/gists{/gist_id}", "starred_url": "https://api.github.com/users/austinmw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/austinmw/subscriptions", "organizations_url": "https://api.github.com/users/austinmw/orgs", "repos_url": "https://api.github.com/users/austinmw/repos", "events_url": "https://api.github.com/users/austinmw/events{/privacy}", "received_events_url": "https://api.github.com/users/austinmw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @austinmw Thank you for this feature request.\r\n\r\nI am not sure however, cc my colleague @sgugger who knows much better on the Agent topic!\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,693
1,693
NONE
null
### Feature request I'd like to be able to use Amazon Bedrock available models, for example Claude, as the HfAgent model. ### Motivation Expanded model support ### Your contribution Not sure currently.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24870/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24869
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24869/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24869/comments
https://api.github.com/repos/huggingface/transformers/issues/24869/events
https://github.com/huggingface/transformers/pull/24869
1,808,600,075
PR_kwDOCUB6oc5VtwaD
24,869
๐ŸŒ[i18n-KO] Translated `<debugging>.md`to Korean
{ "login": "kj021", "id": 106062329, "node_id": "U_kgDOBlJh-Q", "avatar_url": "https://avatars.githubusercontent.com/u/106062329?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kj021", "html_url": "https://github.com/kj021", "followers_url": "https://api.github.com/users/kj021/followers", "following_url": "https://api.github.com/users/kj021/following{/other_user}", "gists_url": "https://api.github.com/users/kj021/gists{/gist_id}", "starred_url": "https://api.github.com/users/kj021/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kj021/subscriptions", "organizations_url": "https://api.github.com/users/kj021/orgs", "repos_url": "https://api.github.com/users/kj021/repos", "events_url": "https://api.github.com/users/kj021/events{/privacy}", "received_events_url": "https://api.github.com/users/kj021/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24869). All of your documentation changes will be reflected on that endpoint.", "์†Œํ˜„๋‹˜์ด ๋ฆฌ๋ทฐ๋ฅผ ๊ผผ๊ผผํ•˜๊ฒŒ ํ•ด์ฃผ์…จ๋„ค์š”!\r\nLGTM ๐Ÿ‘", "์•ˆ๋…•ํ•˜์„ธ์š” @kj021 ๋‹˜, ํ˜น์‹œ ์‹œ๊ฐ„์ด ๋‚˜์‹ค๋•Œ ์œ„์˜ ์ˆ˜์ •์‚ฌํ•ญ๋“ค์„ ๋ฐ˜์˜ํ•ด์ฃผ์‹œ๊ธธ ๋ฐ”๋ž๋‹ˆ๋‹ค! ๊ฐ์‚ฌํ•ฉ๋‹ˆ๋‹ค.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,695
1,695
CONTRIBUTOR
null
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" --> # What does this PR do? Translated the `debugging.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 <!-- This leaves a record on the main issue! Please remove this part when you practice on the PseudoLab repo! :smile: --> ## Before reviewing - [x] Check for missing / redundant translations - [x] Grammar Check - [x] Review or Add new terms to glossary - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas ## Who can review? (Initial) <!-- 1. Please reveal the comment below, which asks the PseudoLab team members for a review, only after all of the checks above are complete! --> Team PseudoLab, may you please review this PR? @sronger, @TaeYupNoh, @kj021, @HanNayeoniee, @eenzeenee, @sim-so ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. Please reveal the comment below, which asks the Hugging Face staff for a review, only after the review with the PseudoLab team members is finished! --> <!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24869/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24869/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24869", "html_url": "https://github.com/huggingface/transformers/pull/24869", "diff_url": "https://github.com/huggingface/transformers/pull/24869.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24869.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24868
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24868/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24868/comments
https://api.github.com/repos/huggingface/transformers/issues/24868/events
https://github.com/huggingface/transformers/pull/24868
1,808,545,572
PR_kwDOCUB6oc5VtkKe
24,868
Remove `tests/onnx`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? Remove `tests/onnx`, as discussed in https://github.com/huggingface/transformers/pull/24800#issuecomment-1634822781. Note there are still some tests like `TFGPT2ModelTest::test_onnx_runtime_optimize` which are not removed in this PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24868/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24868", "html_url": "https://github.com/huggingface/transformers/pull/24868", "diff_url": "https://github.com/huggingface/transformers/pull/24868.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24868.patch", "merged_at": 1689626248000 }
https://api.github.com/repos/huggingface/transformers/issues/24867
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24867/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24867/comments
https://api.github.com/repos/huggingface/transformers/issues/24867/events
https://github.com/huggingface/transformers/pull/24867
1,808,458,045
PR_kwDOCUB6oc5VtUGi
24,867
Skip failing `ZeroShotAudioClassificationPipelineTests::test_small_model_pt` for now
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? Skip failing `ZeroShotAudioClassificationPipelineTests::test_small_model_pt` for now. See the [failing job](https://app.circleci.com/pipelines/github/huggingface/transformers/68367/workflows/0d616969-381a-4ce2-96f9-ec83b259df75/jobs/856192); it is likely a `datasets` issue.
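For context, a minimal sketch of what temporarily disabling such a test looks like; the class and test names come from the PR title, and the skip reason is illustrative.

```python
import unittest

class ZeroShotAudioClassificationPipelineTests(unittest.TestCase):
    @unittest.skip("Temporarily disabled: failing on CI, likely a `datasets` issue")
    def test_small_model_pt(self):
        # The body stays untouched; unittest reports the test as skipped, not failed.
        raise AssertionError("would fail until the upstream issue is resolved")

if __name__ == "__main__":
    unittest.main()
```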
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24867/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24867/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24867", "html_url": "https://github.com/huggingface/transformers/pull/24867", "diff_url": "https://github.com/huggingface/transformers/pull/24867.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24867.patch", "merged_at": 1689623510000 }
https://api.github.com/repos/huggingface/transformers/issues/24866
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24866/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24866/comments
https://api.github.com/repos/huggingface/transformers/issues/24866/events
https://github.com/huggingface/transformers/issues/24866
1,808,402,563
I_kwDOCUB6oc5rygiD
24,866
ValueError: operands could not be broadcast together with shapes (60,4) (24,2,60,16,1024,64)
{ "login": "Luke-4", "id": 138615931, "node_id": "U_kgDOCEMcew", "avatar_url": "https://avatars.githubusercontent.com/u/138615931?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Luke-4", "html_url": "https://github.com/Luke-4", "followers_url": "https://api.github.com/users/Luke-4/followers", "following_url": "https://api.github.com/users/Luke-4/following{/other_user}", "gists_url": "https://api.github.com/users/Luke-4/gists{/gist_id}", "starred_url": "https://api.github.com/users/Luke-4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Luke-4/subscriptions", "organizations_url": "https://api.github.com/users/Luke-4/orgs", "repos_url": "https://api.github.com/users/Luke-4/repos", "events_url": "https://api.github.com/users/Luke-4/events{/privacy}", "received_events_url": "https://api.github.com/users/Luke-4/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You should use the [forums](https://discuss.huggingface.co/) to debug your code as we keep issues for feature requests and bugs in the library only. Here it seems your `compute_metrics` function does not take the logits from the result of the model, which contains two arrays at least (the logits and some kind of hidden state)." ]
1,689
1,691
1,691
NONE
null
I am trying to fine-tune a model using the trainer API but I am getting this error: ![image](https://github.com/huggingface/transformers/assets/138615931/22a5ce44-a0ee-4fba-83f4-9d31c9ce76d4) I have looked online and I can't find anything similar to this, at least related to transformers and NLP. Here is the code that I am using: https://colab.research.google.com/drive/1hdAG3rC1LHp7tJ4DCKbt90IDFsGcubHf?usp=sharing
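A minimal sketch of the fix suggested in the reply above: pull the logits out of the prediction tuple before computing metrics. The accuracy metric and names here are illustrative, not taken from the linked notebook.

```python
import numpy as np

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    # When the model also returns hidden states, `predictions` is a tuple and
    # the logits are its first element; metrics must be computed on those.
    if isinstance(predictions, tuple):
        predictions = predictions[0]
    preds = np.argmax(predictions, axis=-1)
    return {"accuracy": float((preds == labels).mean())}
```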
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24866/timeline
not_planned
null
null
https://api.github.com/repos/huggingface/transformers/issues/24865
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24865/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24865/comments
https://api.github.com/repos/huggingface/transformers/issues/24865/events
https://github.com/huggingface/transformers/pull/24865
1,808,357,147
PR_kwDOCUB6oc5Vs9mG
24,865
Skip Add model like job
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? The "Add model like" job has been failing for a mysterious reason since this morning. I suggest skipping it for now and re-enabling it once it is fixed.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24865/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24865/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24865", "html_url": "https://github.com/huggingface/transformers/pull/24865", "diff_url": "https://github.com/huggingface/transformers/pull/24865.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24865.patch", "merged_at": 1689623524000 }
https://api.github.com/repos/huggingface/transformers/issues/24864
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24864/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24864/comments
https://api.github.com/repos/huggingface/transformers/issues/24864/events
https://github.com/huggingface/transformers/pull/24864
1,808,284,952
PR_kwDOCUB6oc5VstnY
24,864
Fix the fetch of all example tests
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24864). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? I noticed on recent PRs that when all tests are fetched, the example tests are not run. Upon closer inspection, it's because the test for `"all"` in the `test_fetcher` compares a list instead of a string. This PR addresses that.
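A self-contained illustration of the bug class described above; the actual variable names in the `test_fetcher` may differ.

```python
test_list = ["all"]

print(test_list == "all")    # False: a list never equals a string, so the
                             # "run everything" branch silently never fires
print(test_list == ["all"])  # True: comparing against a list works
print("all" in test_list)    # True: membership is another correct check
```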
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24864/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24864/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24864", "html_url": "https://github.com/huggingface/transformers/pull/24864", "diff_url": "https://github.com/huggingface/transformers/pull/24864.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24864.patch", "merged_at": 1689617413000 }
https://api.github.com/repos/huggingface/transformers/issues/24863
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24863/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24863/comments
https://api.github.com/repos/huggingface/transformers/issues/24863/events
https://github.com/huggingface/transformers/pull/24863
1,808,215,493
PR_kwDOCUB6oc5VseMW
24,863
deprecate no_cuda
{ "login": "SunMarc", "id": 57196510, "node_id": "MDQ6VXNlcjU3MTk2NTEw", "avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SunMarc", "html_url": "https://github.com/SunMarc", "followers_url": "https://api.github.com/users/SunMarc/followers", "following_url": "https://api.github.com/users/SunMarc/following{/other_user}", "gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}", "starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions", "organizations_url": "https://api.github.com/users/SunMarc/orgs", "repos_url": "https://api.github.com/users/SunMarc/repos", "events_url": "https://api.github.com/users/SunMarc/events{/privacy}", "received_events_url": "https://api.github.com/users/SunMarc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
MEMBER
null
# What does this PR do? This PR deprecates the `no_cuda` arg because it is confusing for Mac users, as their models get dispatched to the `mps` device when `no_cuda=False`. If they want to train the model on CPU, they need to set `no_cuda=True`, which is not intuitive. We rename it to `use_cpu` instead. Related issue #24697
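A small usage sketch of the renamed flag, assuming a transformers version that includes this change; the output directory name is arbitrary.

```python
from transformers import TrainingArguments

# `use_cpu=True` replaces the old double-negative `no_cuda=True`; on a Mac this
# also avoids the surprising silent dispatch to the `mps` device.
args = TrainingArguments(output_dir="tmp_out", use_cpu=True)
print(args.device)  # expected: device(type="cpu")
```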
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24863/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24863/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24863", "html_url": "https://github.com/huggingface/transformers/pull/24863", "diff_url": "https://github.com/huggingface/transformers/pull/24863.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24863.patch", "merged_at": 1689619949000 }
https://api.github.com/repos/huggingface/transformers/issues/24862
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24862/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24862/comments
https://api.github.com/repos/huggingface/transformers/issues/24862/events
https://github.com/huggingface/transformers/pull/24862
1,808,192,854
PR_kwDOCUB6oc5VsZJ0
24,862
Fix token pass
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24862). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? The `token` passed along in `PreTrainedTokenizerBase.from_pretrained` is passed along twice at the end: once in the kwargs and once as `use_auth_token`. This caused the speech examples to fail.
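A toy reproduction of the failure mode with stand-in names (nothing here is the actual tokenizer code): forwarding a value both inside `**kwargs` and as an explicit keyword raises a `TypeError`.

```python
def from_pretrained_stub(*, use_auth_token=None, **kwargs):
    # Stand-in for the downstream call that received the token twice.
    return use_auth_token

kwargs = {"use_auth_token": "hf_dummy"}
token = kwargs["use_auth_token"]

try:
    from_pretrained_stub(use_auth_token=token, **kwargs)  # token passed twice
except TypeError as e:
    print(e)  # ... got multiple values for keyword argument 'use_auth_token'

kwargs.pop("use_auth_token")  # the fix: drop it from kwargs before forwarding
print(from_pretrained_stub(use_auth_token=token, **kwargs))  # hf_dummy
```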
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24862/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24862/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24862", "html_url": "https://github.com/huggingface/transformers/pull/24862", "diff_url": "https://github.com/huggingface/transformers/pull/24862.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24862.patch", "merged_at": 1689614832000 }
https://api.github.com/repos/huggingface/transformers/issues/24861
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24861/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24861/comments
https://api.github.com/repos/huggingface/transformers/issues/24861/events
https://github.com/huggingface/transformers/pull/24861
1,808,082,595
PR_kwDOCUB6oc5VsA7E
24,861
fix broken links in READMEs
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? Currently, the foreign-language READMEs have broken links; this PR fixes them. cc @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24861/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24861/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24861", "html_url": "https://github.com/huggingface/transformers/pull/24861", "diff_url": "https://github.com/huggingface/transformers/pull/24861.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24861.patch", "merged_at": 1689612434000 }
https://api.github.com/repos/huggingface/transformers/issues/24860
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24860/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24860/comments
https://api.github.com/repos/huggingface/transformers/issues/24860/events
https://github.com/huggingface/transformers/issues/24860
1,808,070,386
I_kwDOCUB6oc5rxPby
24,860
Model parameters don't update with deepspeed integration
{ "login": "avivbrokman", "id": 35349273, "node_id": "MDQ6VXNlcjM1MzQ5Mjcz", "avatar_url": "https://avatars.githubusercontent.com/u/35349273?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avivbrokman", "html_url": "https://github.com/avivbrokman", "followers_url": "https://api.github.com/users/avivbrokman/followers", "following_url": "https://api.github.com/users/avivbrokman/following{/other_user}", "gists_url": "https://api.github.com/users/avivbrokman/gists{/gist_id}", "starred_url": "https://api.github.com/users/avivbrokman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avivbrokman/subscriptions", "organizations_url": "https://api.github.com/users/avivbrokman/orgs", "repos_url": "https://api.github.com/users/avivbrokman/repos", "events_url": "https://api.github.com/users/avivbrokman/events{/privacy}", "received_events_url": "https://api.github.com/users/avivbrokman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I apologize if this is actually a deepspeed issue rather than a deepspeed integration issueโ€”I'm having trouble parsing which option is the case.", "I think it is expected behaviour, because Zero3 partitions the model weights. You should use `deepspeed.zero.GatheredParameters` context manager or you can check the partitioned parameters stored in the `param.ds_tensor` attribute. To prove it, you can check your `model.encoder.block[0].layer[0].SelfAttention.q.weight.data.shape`, it should be empty.", "@1ytic, you're rightโ€”it was empty. It's not clear to me how to use the suggestions you gave meโ€”can you provide a little more detail?\r\n", "Try to use this tensor `model.encoder.block[0].layer[0].SelfAttention.q.weight.ds_tensor.clone()` in your `MonitorParameterCallback`.", "Thanks! It turns out, the issue was due to NaN loss from fp16. ", "Hi @avivbrokman, did you fix the problem? Do you mind sharing how you fixed it? Thanks ", "Hi @avivbrokman, I met a similar problem when using zero3, the model parameters that are supposed to update does not update when I print them in forward()" ]
1,689
1,699
1,689
NONE
null
### System Info - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.10.173-154.642.amzn2.x86_64-x86_64-with-glibc2.26 - Python version: 3.10.10 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? @pacman100 ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction (1) I noticed that performance wasn't improving over training epochs, so I added the following code to `examples/pytorch/translation/run_translation.py` at line 250 in order to monitor parameter value changes: ``` from transformers import TrainerCallback class MonitorParameterCallback(TrainerCallback): def on_train_begin(self, args, state, control, model, **kwargs): self.original_value = model.encoder.block[0].layer[0].SelfAttention.q.weight.clone() def on_epoch_end(self, args, state, control, model, **kwargs): new_value = model.encoder.block[0].layer[0].SelfAttention.q.weight.clone() change = new_value - self.original_value change_norm = float(change.square().sum()) print('change = ', change_norm) ``` However, any other method of confirming parameter changes would suffice. (2) The following code (no deepspeed) prints out positive values for training loss and change in parameter values: ``` python examples/pytorch/translation/run_translation.py \ --model_name_or_path t5-small \ --do_train \ --source_lang en \ --target_lang ro \ --dataset_name wmt16 \ --dataset_config_name ro-en \ --output_dir /tmp/tst-translation \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate \ --max_train_samples 16 ``` (3) The following code (with deepspeed) prints out positive values for training loss but 0 for change in parameter values: ``` deepspeed examples/pytorch/translation/run_translation.py \ --model_name_or_path t5-small \ --do_train \ --source_lang en \ --target_lang ro \ --dataset_name wmt16 \ --dataset_config_name ro-en \ --output_dir /tmp/tst-translation \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate \ --max_train_samples 16 \ --deepspeed tests/deepspeed/ds_config_zero3.json ``` ### Expected behavior When training with deepspeed, parameter values of the model should update.
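Following the `deepspeed.zero.GatheredParameters` suggestion in the comments above, here is a sketch of a callback that gathers the full ZeRO-3 weight before reading it. It assumes `deepspeed` is installed and reuses the T5 module path from the report.

```python
import deepspeed
from transformers import TrainerCallback

class ZeRO3MonitorParameterCallback(TrainerCallback):
    def _snapshot(self, model):
        param = model.encoder.block[0].layer[0].SelfAttention.q.weight
        # Under ZeRO-3, `param.data` is empty on each rank; gather the full
        # tensor (read-only) before cloning it.
        with deepspeed.zero.GatheredParameters(param):
            return param.detach().clone()

    def on_train_begin(self, args, state, control, model=None, **kwargs):
        self.original_value = self._snapshot(model)

    def on_epoch_end(self, args, state, control, model=None, **kwargs):
        change = self._snapshot(model) - self.original_value
        print("change = ", float(change.square().sum()))
```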
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24860/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24860/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24859
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24859/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24859/comments
https://api.github.com/repos/huggingface/transformers/issues/24859/events
https://github.com/huggingface/transformers/pull/24859
1,808,037,437
PR_kwDOCUB6oc5Vr3Li
24,859
Add TAPEX to the list of deprecated models
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24859). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? TAPEX was not in the list of deprecated models, so importing it with the auto API did not work. I'll make a script to check that the content of that constant is in sync with the content of the deprecated folder so this doesn't happen again. Fixes #24852
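A rough sketch of the consistency check mentioned above. The folder path and the contents of the constant are assumptions for illustration, not the actual script.

```python
import os

# Assumed stand-in for transformers' internal list of deprecated model types.
DEPRECATED_MODELS = ["mctct", "mmbt", "retribert", "tapex", "trajectory_transformer", "van"]

def check_deprecated_models_in_sync(folder="src/transformers/models/deprecated"):
    on_disk = {name for name in os.listdir(folder) if not name.startswith("_")}
    missing = sorted(on_disk - set(DEPRECATED_MODELS))
    if missing:
        raise ValueError(f"Deprecated on disk but missing from the constant: {missing}")

check_deprecated_models_in_sync()  # run from the repository root
```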
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24859/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24859/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24859", "html_url": "https://github.com/huggingface/transformers/pull/24859", "diff_url": "https://github.com/huggingface/transformers/pull/24859.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24859.patch", "merged_at": 1689612783000 }
https://api.github.com/repos/huggingface/transformers/issues/24858
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24858/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24858/comments
https://api.github.com/repos/huggingface/transformers/issues/24858/events
https://github.com/huggingface/transformers/pull/24858
1,807,915,993
PR_kwDOCUB6oc5VrctI
24,858
Ra
{ "login": "jamesthesnake", "id": 8227820, "node_id": "MDQ6VXNlcjgyMjc4MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/8227820?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jamesthesnake", "html_url": "https://github.com/jamesthesnake", "followers_url": "https://api.github.com/users/jamesthesnake/followers", "following_url": "https://api.github.com/users/jamesthesnake/following{/other_user}", "gists_url": "https://api.github.com/users/jamesthesnake/gists{/gist_id}", "starred_url": "https://api.github.com/users/jamesthesnake/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jamesthesnake/subscriptions", "organizations_url": "https://api.github.com/users/jamesthesnake/orgs", "repos_url": "https://api.github.com/users/jamesthesnake/repos", "events_url": "https://api.github.com/users/jamesthesnake/events{/privacy}", "received_events_url": "https://api.github.com/users/jamesthesnake/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,689
1,689
1,689
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24858/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24858/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24858", "html_url": "https://github.com/huggingface/transformers/pull/24858", "diff_url": "https://github.com/huggingface/transformers/pull/24858.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24858.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24857
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24857/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24857/comments
https://api.github.com/repos/huggingface/transformers/issues/24857/events
https://github.com/huggingface/transformers/issues/24857
1,807,753,063
I_kwDOCUB6oc5rwB9n
24,857
Everything CLIP related seems to break starting from transformers 4.28.0
{ "login": "andreaferretti", "id": 1962818, "node_id": "MDQ6VXNlcjE5NjI4MTg=", "avatar_url": "https://avatars.githubusercontent.com/u/1962818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andreaferretti", "html_url": "https://github.com/andreaferretti", "followers_url": "https://api.github.com/users/andreaferretti/followers", "following_url": "https://api.github.com/users/andreaferretti/following{/other_user}", "gists_url": "https://api.github.com/users/andreaferretti/gists{/gist_id}", "starred_url": "https://api.github.com/users/andreaferretti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andreaferretti/subscriptions", "organizations_url": "https://api.github.com/users/andreaferretti/orgs", "repos_url": "https://api.github.com/users/andreaferretti/repos", "events_url": "https://api.github.com/users/andreaferretti/events{/privacy}", "received_events_url": "https://api.github.com/users/andreaferretti/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Just to make something reproducible, here we can see that the output of CLIPProcessor changes. I run the script\r\n\r\n```python\r\nfrom PIL import Image\r\nimport requests\r\n\r\nimport transformers\r\nfrom torchvision.transforms.functional import to_tensor\r\nfrom transformers import CLIPProcessor\r\n\r\nprocessor = CLIPProcessor.from_pretrained(\"openai/clip-vit-large-patch14\")\r\n\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\nreference = to_tensor(image)\r\n\r\nencoded_data = processor(\r\n text=[\"\"],\r\n images=[reference],\r\n return_tensors=\"pt\",\r\n max_length=77,\r\n padding=\"max_length\",\r\n truncation=True,\r\n)\r\n\r\nprint(transformers.__version__)\r\nprint(encoded_data.pixel_values.mean())\r\n```\r\n\r\nWith 4.27.4 I get\r\n\r\n```\r\n4.27.4\r\ntensor(0.2463)\r\n```\r\n\r\nWith 4.28.0 I get\r\n\r\n```\r\n4.28.0\r\ntensor(-1.6673)\r\n```", "I figured out the issue: the CLIPProcessor expects tensors in the range [0, 255], but only starting from transformers 4.28.0. This seems a pretty breaking change to me! If I multiply my tensor by 255, I get the right results", "Hi,\r\n\r\nThanks for reporting. This seems related to https://github.com/huggingface/transformers/issues/23096 and may be caused by https://github.com/huggingface/transformers/pull/22458. cc @amyeroberts ", "Hi @andreaferretti, thanks for raising this issue! \r\n\r\nWhat's being observed, is actually a resolution of inconsistent behaviour of the previous CLIP feature extractors. I'll explain: \r\n* to_tensor() doesn't just convert to a pytorch tensor, it also rescales the values to be between 0 - 1\r\n* The deprecated feature extractors and image processors use Pillow for resizing their images. \r\n* Pillow requires that for RGB, pixel values are uint8 between 0-255. \r\n* Therefore input images with float values are upscaled and cast to uint8 before being converted to a PIL.Image.Image\r\n\r\nIn the previous behaviour, images after resizing kept their upscaled values. Currently, if an image was upscaled during resizing, the pixel values are downscaled back e.g. to between 0-1. This ensures that the user can set `do_resize` to `True` or `False` and the only difference in the output image is its size (and interpolated pixels). Previously, if you set `do_resize=False`, then your image pixel values are never upscaled, they remain between 0-1, would be downscaled again, as is happening now. \r\n\r\nRather than try to infer processor behaviour based on inputs, we keep the processing behaviour consistent and let the user explicitly control this. If you wish to input images with pixel values that have been downscaled, then you just need to tell the image processor not to do any additional scaling using the `do_rescale` flag:\r\n\r\n```py\r\noutputs = image_processor(images, do_rescale=False)\r\n```\r\n\r\nAlternatively, you could pass in the images without calling `to_tensor`. \r\n\r\nIn the issues linked by @NielsRogge, this is also explained: https://github.com/huggingface/transformers/issues/23096#issuecomment-1557699476\r\n\r\nHowever, this is the second time a similar issue has been raised, indicating that the behaviour is unexpected. I'll think about how to best address this with documentation or possible warning within the code. 
", "Yeah, it would be useful to add a warning mentioning `do_rescale`, as well as mention this issue in the documentation of CLIP and related models", "I am still getting widely different results on the JAX implementation of [`scenic`](https://github.com/google-research/scenic) for CLIP and the one we have in `transformers` (PyTorch).\r\n\r\n```python\r\nfrom transformers import CLIPImageProcessor, CLIPVisionModelWithProjection\r\nimport torch\r\nfrom PIL import Image \r\n\r\nimport jax\r\nimport numpy as np\r\nfrom scenic.projects.baselines.clip import model as clip\r\n\r\n\r\ndef _clip_preprocess(images, size):\r\n target_shape = images.shape[:-3] + (size, size, images.shape[-1])\r\n images = jax.image.resize(images, shape=target_shape, method='bicubic')\r\n images = clip.normalize_image(images)\r\n\r\n return images\r\n\r\ndef get_image_in_format(image, size, format=\"pt\"):\r\n images = np.array(image) / 255.\r\n images = np.expand_dims(images, 0)\r\n pp_images = _clip_preprocess(images, size)\r\n\r\n if format == \"pt\":\r\n inputs = {}\r\n inputs[\"pixel_values\"] = torch.from_numpy(np.array(pp_images))\r\n inputs[\"pixel_values\"] = inputs[\"pixel_values\"].permute(0, 3, 1, 2)\r\n return inputs \r\n\r\n inputs = pp_images\r\n return inputs\r\n\r\n# Comes from https://huggingface.co/datasets/diffusers/docs-images/blob/main/amused/glowing_512_2.png\r\nimage = Image.open(\"glowing_512_2.png\")\r\nprocessor = CLIPImageProcessor.from_pretrained(\"openai/clip-vit-large-patch14-336\")\r\nmodel = CLIPVisionModelWithProjection.from_pretrained(\"openai/clip-vit-large-patch14-336\").eval()\r\n\r\ninputs = get_image_in_format(image, processor.crop_size[\"height\"], format=\"pt\")\r\nwith torch.no_grad():\r\n output = model(**inputs)\r\n\r\ntemp = output.image_embeds[0, :4].numpy().flatten().tolist()\r\nprint(\", \".join([str(f\"{x:.4f}\") for x in temp]))\r\nprint(\"=====Printing JAX model=====\")\r\n\r\n\r\n_CLIP_MODEL_NAME = 'vit_l14_336px'\r\n_model = clip.MODELS[_CLIP_MODEL_NAME]()\r\n_model_vars = clip.load_model_vars(_CLIP_MODEL_NAME)\r\ninput_image_size = clip.IMAGE_RESOLUTION[_CLIP_MODEL_NAME]\r\n\r\nimages = get_image_in_format(image, size=input_image_size, format=\"jax\")\r\ntemp = np.asarray(image_embs[0, :4]).flatten().tolist()\r\nprint(\", \".join([str(f\"{x:.4f}\") for x in temp]))\r\n```\r\n\r\nGives:\r\n\r\n```\r\n-0.0898, 0.1304, 0.2402, -0.0378\r\n=====Printing JAX model=====\r\n-0.0046, 0.0068, 0.0124, -0.0020\r\n```\r\n\r\nfor what seems to be quite different for the exact same input. \r\n\r\n@sanchit-gandhi would you have a clue about it? ", "Hi,\r\n\r\nNot sure if you're comparing apples-to-apples, when comparing the original CLIP repository to the Transformers one, they match: https://colab.research.google.com/drive/15ZhC32ovBKAU5JqC-kcIOntW_oU-JrkB?usp=sharing.\r\n\r\nScenic is not the original implementation of CLIP so there might be some differences. 
I would first check whether the Scenic implementation outputs the same logits as the OpenAI CLIP repository.", "You are right:\r\n\r\n```python\r\nimport clip\r\nimport torch \r\nimport jax\r\nimport numpy as np\r\nfrom scenic.projects.baselines.clip import model as clip_scenic\r\n\r\ninputs = np.random.randn(1, 336, 336, 3)\r\nmodel, preprocess = clip.load(\"ViT-L/14@336px\", device=\"cpu\")\r\n\r\nwith torch.no_grad():\r\n image = torch.from_numpy(inputs.transpose(0, 3, 1, 2))\r\n image_features = model.encode_image(image).numpy()\r\n print(image_features.shape)\r\n\r\ntemp = image_features[0, :4].flatten().tolist()\r\nprint(\", \".join([str(f\"{x:.4f}\") for x in temp]))\r\nprint(\"=====Printing JAX model=====\")\r\n\r\n_CLIP_MODEL_NAME = 'vit_l14_336px'\r\n_model = clip_scenic.MODELS[_CLIP_MODEL_NAME]()\r\n_model_vars = clip_scenic.load_model_vars(_CLIP_MODEL_NAME)\r\n\r\nimages = jax.numpy.array(inputs)\r\nimage_embs, _ = _model.apply(_model_vars, images, None)\r\nprint(image_embs.shape)\r\ntemp = np.asarray(image_embs[0, :4]).flatten().tolist()\r\nprint(\", \".join([str(f\"{x:.4f}\") for x in temp]))\r\n```\r\n\r\nGives:\r\n\r\n```bash\r\n(1, 768)\r\n-0.1827, 0.7319, 0.8779, 0.4829\r\n=====Printing JAX model=====\r\n(1, 768)\r\n-0.0107, 0.0429, 0.0514, 0.0283\r\n```\r\n\r\nSorry for the false alarm here. Have raised an issue: https://github.com/google-research/scenic/issues/991. " ]
1,689
1,706
1,694
NONE
null
### System Info - `transformers` version: 4.28.0 - Platform: Linux-5.10.107+-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @amyeroberts ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction It seems to me that there is some regression starting from transformers 4.28.0 that affects the CLIP vision model and everything related to it. In particular, I am having issues with * ClipSeg * the CLIPVisionModel proper. # ClipSeg For ClipSeg, I am able to use it and get the expected masks, essentially by following the example [here](https://huggingface.co/docs/transformers/model_doc/clipseg#transformers.CLIPSegForImageSegmentation): ```python from transformers import AutoProcessor, CLIPSegForImageSegmentation from PIL import Image import requests processor = AutoProcessor.from_pretrained("CIDAS/clipseg-rd64-refined") model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) texts = ["a cat", "a remote", "a blanket"] inputs = processor(text=texts, images=[image] * len(texts), padding=True, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits print(logits.shape) ``` Then `logits` contains the logits from which I can obtain a mask by something like ```python mask = torch.exp(logits) mask /= mask.max() ``` I tested this and it works reliably until transformers 4.27.4. But with transformers 4.28.0, I get masks that are completely black regardless of the input image. # CLIPVisionModel This is harder to describe, since it relies on an internal model. I have trained a model that makes use of the image embeddings generated by CLIPVisionModel for custom subject generation. Everything works well until transformers 4.27.4. If I switch to 4.28.0, the generated image changes completely. The only change is installing 4.28.0. In fact, if I save the embeddings generated by CLIPVisionModel with the two different versions for any random image, I see that they are different. To be sure, this is how I generate image embeddings: ```python clip = CLIPModel.from_pretrained(...) preprocessor = CLIPProcessor.from_pretrained(...) ... encoded_data = preprocessor( text=prompts, images=images, return_tensors="pt", max_length=77, padding="max_length", truncation=True, ) clip_output = clip( input_ids=encoded_data.input_ids, pixel_values=encoded_data.pixel_values, ) image_embeds = clip.visual_projection( clip_output.vision_model_output.last_hidden_state ) ``` For reference, I am using clip-vit-large-patch14. ### Expected behavior I would expect CLIPVisionModel to give the same result on the same image, both in 4.27.4 and in 4.28.0.
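Based on the `do_rescale` explanation in the comments above, here is a sketch of the two consistent ways to feed images from 4.28.0 onward. The solid-color test image is illustrative, and the two outputs may differ very slightly due to resizing rounding.

```python
import numpy as np
from PIL import Image
from transformers import CLIPImageProcessor

image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")
image = Image.new("RGB", (256, 256), color=(128, 64, 32))

# Option 1: pass unscaled inputs (PIL images or 0-255 arrays), as before 4.28.0.
a = image_processor(images=[image], return_tensors="pt")

# Option 2: pass [0, 1] floats, but tell the processor not to rescale them again.
scaled = np.asarray(image).astype(np.float32) / 255.0
b = image_processor(images=[scaled], return_tensors="pt", do_rescale=False)

print(float(a.pixel_values.mean()), float(b.pixel_values.mean()))  # should closely agree
```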
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24857/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24857/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24856
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24856/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24856/comments
https://api.github.com/repos/huggingface/transformers/issues/24856/events
https://github.com/huggingface/transformers/pull/24856
1,807,729,261
PR_kwDOCUB6oc5Vqzm7
24,856
Replace assert statements with exceptions
{ "login": "syedsalman137", "id": 98826220, "node_id": "U_kgDOBeP37A", "avatar_url": "https://avatars.githubusercontent.com/u/98826220?v=4", "gravatar_id": "", "url": "https://api.github.com/users/syedsalman137", "html_url": "https://github.com/syedsalman137", "followers_url": "https://api.github.com/users/syedsalman137/followers", "following_url": "https://api.github.com/users/syedsalman137/following{/other_user}", "gists_url": "https://api.github.com/users/syedsalman137/gists{/gist_id}", "starred_url": "https://api.github.com/users/syedsalman137/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/syedsalman137/subscriptions", "organizations_url": "https://api.github.com/users/syedsalman137/orgs", "repos_url": "https://api.github.com/users/syedsalman137/repos", "events_url": "https://api.github.com/users/syedsalman137/events{/privacy}", "received_events_url": "https://api.github.com/users/syedsalman137/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? I have replaced the assert statements with appropriate exceptions in the `src/transformers/models/` directory, for all models whose names begin with the letters `a` and `b`. Also, I have corrected error handling in places where except statements were handling AssertionError even though it could never be raised. Here is an example: ``` try: if pointer.shape != array.shape: raise ValueError(f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched") except AssertionError as e: # Incorrect line e.args += (pointer.shape, array.shape) raise ``` I changed the above to: ``` try: if pointer.shape != array.shape: raise ValueError(f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched") except ValueError as e: # Corrected the line e.args += (pointer.shape, array.shape) raise ``` Fixes #12789 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24856/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24856/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24856", "html_url": "https://github.com/huggingface/transformers/pull/24856", "diff_url": "https://github.com/huggingface/transformers/pull/24856.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24856.patch", "merged_at": 1689618764000 }
https://api.github.com/repos/huggingface/transformers/issues/24855
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24855/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24855/comments
https://api.github.com/repos/huggingface/transformers/issues/24855/events
https://github.com/huggingface/transformers/pull/24855
1,807,728,271
PR_kwDOCUB6oc5VqzZD
24,855
Fix comments for `_merge_heads`
{ "login": "bofenghuang", "id": 38185248, "node_id": "MDQ6VXNlcjM4MTg1MjQ4", "avatar_url": "https://avatars.githubusercontent.com/u/38185248?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bofenghuang", "html_url": "https://github.com/bofenghuang", "followers_url": "https://api.github.com/users/bofenghuang/followers", "following_url": "https://api.github.com/users/bofenghuang/following{/other_user}", "gists_url": "https://api.github.com/users/bofenghuang/gists{/gist_id}", "starred_url": "https://api.github.com/users/bofenghuang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bofenghuang/subscriptions", "organizations_url": "https://api.github.com/users/bofenghuang/orgs", "repos_url": "https://api.github.com/users/bofenghuang/repos", "events_url": "https://api.github.com/users/bofenghuang/events{/privacy}", "received_events_url": "https://api.github.com/users/bofenghuang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24855/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24855/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24855", "html_url": "https://github.com/huggingface/transformers/pull/24855", "diff_url": "https://github.com/huggingface/transformers/pull/24855.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24855.patch", "merged_at": 1689606437000 }
https://api.github.com/repos/huggingface/transformers/issues/24854
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24854/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24854/comments
https://api.github.com/repos/huggingface/transformers/issues/24854/events
https://github.com/huggingface/transformers/pull/24854
1,807,689,851
PR_kwDOCUB6oc5Vqq2H
24,854
[BLIP-2] Improve conversion script
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Opened PRs on the respective repos to update the layer norm eps:\r\n\r\n- https://huggingface.co/Salesforce/blip2-flan-t5-xl/discussions/1\r\n- https://huggingface.co/Salesforce/blip2-flan-t5-xxl/discussions/6\r\n- https://huggingface.co/Salesforce/blip2-flan-t5-xl-coco/discussions/2\r\n- https://huggingface.co/Salesforce/blip2-opt-2.7b/discussions/14\r\n- https://huggingface.co/Salesforce/blip2-opt-2.7b-coco/discussions/1\r\n- https://huggingface.co/Salesforce/blip2-opt-6.7b/discussions/4\r\n- https://huggingface.co/Salesforce/blip2-opt-6.7b-coco/discussions/1\r\n\r\nPR itself can be merged already.", "Feel free to merge :)" ]
1,689
1,694
1,694
CONTRIBUTOR
null
# What does this PR do? When investigating an issue reported [here](https://github.com/salesforce/LAVIS/issues/418), I've rerun and improved BLIP-2's conversion script (based on InstructBLIP). It's important to compare apples to apples, so I had to fork the LAVIS repo and make sure the original model is also run in float32.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24854/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24854", "html_url": "https://github.com/huggingface/transformers/pull/24854", "diff_url": "https://github.com/huggingface/transformers/pull/24854.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24854.patch", "merged_at": 1694716941000 }
https://api.github.com/repos/huggingface/transformers/issues/24853
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24853/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24853/comments
https://api.github.com/repos/huggingface/transformers/issues/24853/events
https://github.com/huggingface/transformers/pull/24853
1,807,636,221
PR_kwDOCUB6oc5VqfNM
24,853
Fix `is_vision_available`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do? Fix #24845 After #23163, we need an extra check if we want to support the use of `pillow-simd`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24853/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24853/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24853", "html_url": "https://github.com/huggingface/transformers/pull/24853", "diff_url": "https://github.com/huggingface/transformers/pull/24853.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24853.patch", "merged_at": 1689605931000 }
https://api.github.com/repos/huggingface/transformers/issues/24852
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24852/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24852/comments
https://api.github.com/repos/huggingface/transformers/issues/24852/events
https://github.com/huggingface/transformers/issues/24852
1,807,473,617
I_kwDOCUB6oc5ru9vR
24,852
Loading model microsoft/tapex-base-finetuned-wtq failed with error No module named 'transformers.models.tapex'
{ "login": "WeichenXu123", "id": 19235986, "node_id": "MDQ6VXNlcjE5MjM1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/19235986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WeichenXu123", "html_url": "https://github.com/WeichenXu123", "followers_url": "https://api.github.com/users/WeichenXu123/followers", "following_url": "https://api.github.com/users/WeichenXu123/following{/other_user}", "gists_url": "https://api.github.com/users/WeichenXu123/gists{/gist_id}", "starred_url": "https://api.github.com/users/WeichenXu123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WeichenXu123/subscriptions", "organizations_url": "https://api.github.com/users/WeichenXu123/orgs", "repos_url": "https://api.github.com/users/WeichenXu123/repos", "events_url": "https://api.github.com/users/WeichenXu123/events{/privacy}", "received_events_url": "https://api.github.com/users/WeichenXu123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm unable to reproduce this on my side. Are you sure you are using the latest commit from main?", "@sgugger \r\n\r\nYes. I installed it by:\r\n`pip install git+https://github.com/huggingface/transformers`\r\n\r\nand then in python REPL, run:\r\n```\r\nimport transformers\r\n\r\ntransformers.pipeline(\r\n task=\"table-question-answering\", model=\"microsoft/tapex-base-finetuned-wtq\"\r\n)\r\n```\r\n\r\n\r\nOur MLflow CI (run against transformer master) has the same error:\r\nhttps://github.com/mlflow/mlflow/actions/runs/5567846085/jobs/10170056430#step:15:526\r\n", "@sgugger Did you run it on ubuntu system ? ", "I had some `__pycache__` remaining in `models/tapex` so it didn't error, but after cleaning that folder I can reproduce. Having a look, the fix should come this afternoon.", "It was actually fairly easy to fix. Could you quickly check the PR above solves the issue for you too?" ]
1,689
1,689
1,689
NONE
null
### System Info OS: Ubuntu 22.04 Python 3.9 Packages installed: ``` pip install git+https://github.com/huggingface/transformers pip install datasets huggingface_hub torch torchvision tensorflow accelerate librosa ffmpeg ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The code ``` import transformers transformers.pipeline( task="table-question-answering", model="microsoft/tapex-base-finetuned-wtq" ) ``` raises the error: ``` ModuleNotFoundError: No module named 'transformers.models.tapex' ``` ### Expected behavior Returns a `transformers.pipelines.table_question_answering.TableQuestionAnsweringPipeline` object.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24852/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24852/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24851
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24851/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24851/comments
https://api.github.com/repos/huggingface/transformers/issues/24851/events
https://github.com/huggingface/transformers/issues/24851
1,807,310,941
I_kwDOCUB6oc5ruWBd
24,851
save_pretrained 4bits/8bits model
{ "login": "jameswu2014", "id": 8462999, "node_id": "MDQ6VXNlcjg0NjI5OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/8462999?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jameswu2014", "html_url": "https://github.com/jameswu2014", "followers_url": "https://api.github.com/users/jameswu2014/followers", "following_url": "https://api.github.com/users/jameswu2014/following{/other_user}", "gists_url": "https://api.github.com/users/jameswu2014/gists{/gist_id}", "starred_url": "https://api.github.com/users/jameswu2014/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jameswu2014/subscriptions", "organizations_url": "https://api.github.com/users/jameswu2014/orgs", "repos_url": "https://api.github.com/users/jameswu2014/repos", "events_url": "https://api.github.com/users/jameswu2014/events{/privacy}", "received_events_url": "https://api.github.com/users/jameswu2014/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada and @SunMarc ", "hi @jameswu2014 \r\nThanks for the issue, what is the transformers version you are using? Can you try again with the latest version of `transformers`? Also can you share the full traceback and a reproducible handy snippet? Thanks", "having the same issue, believe because there are string values in the state dicts, that are not really tensors", "Hi @psinger,\r\nI just checked our daily CI and we do have a CI test to check int8 serialization works correctly here: https://github.com/huggingface/transformers/blob/main/tests/bnb/test_mixed_int8.py#L286-L311 and the test is passing.\r\nI believe maybe this PR: https://github.com/huggingface/transformers/pull/24416 fixed your issues, can you try to install transformers from source and use the latest version of `bitsandbytes` ?\r\n\r\n```bash\r\npip uninstall transformers bitsandbytes\r\npip install git+https://github.com/huggingface/transformers.git\r\npip install --upgrade bitsandbytes\r\n```", "Thanks, I actually misspoke, my error is:\r\n`AttributeError: 'str' object has no attribute 'device'\r\n`\r\n\r\nInstalling transformers from source indeed seems to solve it.", "> Hi @psinger, I just checked our daily CI and we do have a CI test to check int8 serialization works correctly here: https://github.com/huggingface/transformers/blob/main/tests/bnb/test_mixed_int8.py#L286-L311 and the test is passing. I believe maybe this PR: #24416 fixed your issues, can you try to install transformers from source and use the latest version of `bitsandbytes` ?\r\n> \r\n> ```shell\r\n> pip uninstall transformers bitsandbytes\r\n> pip install git+https://github.com/huggingface/transformers.git\r\n> pip install --upgrade bitsandbytes\r\n> ```\r\n\r\nWhether 4bits supported? I need both 4bits and 8bits saved.", "Hi @jameswu2014 \r\nThanks for the heads up, as stated above, currently 4bit saving is not supported yet, feel free to raise an issue on bitsandbytes repository to request this feature", "Is there a work-around for 4-bit models? Can I convert the model to something like float16 and then save it? Or is 4-bit fine-tuning not really usable? Thanks.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Closing as this is not related with transformers but bitsandbytes, please track the issue: https://github.com/TimDettmers/bitsandbytes/issues/695 " ]
1,689
1,692
1,692
NONE
null
### System Info I have a BitsAndBytes-quantized 4-bit/8-bit model. How do I save it? I invoked the save_pretrained API; however, I get the error: AttributeError: 'str' object has no attribute 'numel'. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction bnb_config = BitsAndBytesConfig() bnb_config.load_in_8bit = True model = AutoModelForCausalLM.from_pretrained( model_path, load_in_8bit=True, device_map="auto", trust_remote_code=True ).eval() model.save_pretrained("BitsAndBytesQuant8/") ### Expected behavior The model is saved successfully.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24851/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24851/timeline
completed
null
null
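A minimal sketch of the 8-bit save path that the thread reports working once transformers is installed from source alongside a recent bitsandbytes; the checkpoint name is a placeholder, and 4-bit saving remains unsupported per the maintainers' note:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "my-org/my-model",  # placeholder checkpoint name
    quantization_config=bnb_config,
    device_map="auto",
)
# With the fix from PR #24416, 8-bit weights serialize without the
# "'str' object has no attribute 'numel'/'device'" errors reported above.
model.save_pretrained("BitsAndBytesQuant8/")
```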
https://api.github.com/repos/huggingface/transformers/issues/24850
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24850/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24850/comments
https://api.github.com/repos/huggingface/transformers/issues/24850/events
https://github.com/huggingface/transformers/issues/24850
1,807,293,745
I_kwDOCUB6oc5ruR0x
24,850
I used a Trainer to pretrain a BertForMaskedLM model, but the training loss is always zero
{ "login": "ryanforhoon", "id": 78296960, "node_id": "MDQ6VXNlcjc4Mjk2OTYw", "avatar_url": "https://avatars.githubusercontent.com/u/78296960?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ryanforhoon", "html_url": "https://github.com/ryanforhoon", "followers_url": "https://api.github.com/users/ryanforhoon/followers", "following_url": "https://api.github.com/users/ryanforhoon/following{/other_user}", "gists_url": "https://api.github.com/users/ryanforhoon/gists{/gist_id}", "starred_url": "https://api.github.com/users/ryanforhoon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ryanforhoon/subscriptions", "organizations_url": "https://api.github.com/users/ryanforhoon/orgs", "repos_url": "https://api.github.com/users/ryanforhoon/repos", "events_url": "https://api.github.com/users/ryanforhoon/events{/privacy}", "received_events_url": "https://api.github.com/users/ryanforhoon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please use the [forums](https://discuss.huggingface.co/) to debug such issues in your code :-)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,692
1,692
NONE
null
### System Info > transformers==4.28.0 ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I used a Trainer to pretrain a BertForMaskedLM model, but the training loss is always zero ``` config = BertConfig(vocab_size=40000,num_hidden_layers=6,) model = BertForMaskedLM(config) print('Number of parameters: ', model.num_parameters()) pretrained_models_path = "my path" training_args = TrainingArguments( output_dir=pretrained_models_path, overwrite_output_dir=True, per_device_train_batch_size=32, num_train_epochs=10, save_steps=10000, save_total_limit=2, prediction_loss_only = True, fp16=True, ) trainer = Trainer( args=training_args, train_dataset=train_dataset, data_collator=data_collator, model=model, ) trainer.train() ``` When the training finishes, the results show: ``` Step | Training Loss -- | -- 500 | 0.000000 1000 | 0.000000 1500 | 0.000000 2000 | 0.000000 2500 | 0.000000 3000 | 0.000000 TrainOutput(global_step=3130, training_loss=0.0, metrics={'train_runtime': 367.6023, 'train_samples_per_second': 272.033, 'train_steps_per_second': 8.515, 'total_flos': 1779771658266624.0, 'train_loss': 0.0, 'epoch': 10.0}) ``` ### Expected behavior How can I modify the code to resolve this issue?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24850/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24850/timeline
completed
null
null
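The report does not show how `data_collator` was built; one common cause of an all-zero MLM loss is a collator that never produces `labels` (for example, one created with `mlm=False` for a masked-LM model). A hedged sketch of a masked-LM collator setup, with the tokenizer checkpoint chosen only for illustration:

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# mlm=True makes the collator mask 15% of tokens and emit `labels`;
# BertForMaskedLM only computes a loss when labels are present.
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
```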
https://api.github.com/repos/huggingface/transformers/issues/24849
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24849/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24849/comments
https://api.github.com/repos/huggingface/transformers/issues/24849/events
https://github.com/huggingface/transformers/issues/24849
1,807,213,466
I_kwDOCUB6oc5rt-Oa
24,849
unscale_() has already been called on this optimizer since the last update().
{ "login": "paxvinci", "id": 5947642, "node_id": "MDQ6VXNlcjU5NDc2NDI=", "avatar_url": "https://avatars.githubusercontent.com/u/5947642?v=4", "gravatar_id": "", "url": "https://api.github.com/users/paxvinci", "html_url": "https://github.com/paxvinci", "followers_url": "https://api.github.com/users/paxvinci/followers", "following_url": "https://api.github.com/users/paxvinci/following{/other_user}", "gists_url": "https://api.github.com/users/paxvinci/gists{/gist_id}", "starred_url": "https://api.github.com/users/paxvinci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/paxvinci/subscriptions", "organizations_url": "https://api.github.com/users/paxvinci/orgs", "repos_url": "https://api.github.com/users/paxvinci/repos", "events_url": "https://api.github.com/users/paxvinci/events{/privacy}", "received_events_url": "https://api.github.com/users/paxvinci/received_events", "type": "User", "site_admin": false }
[ { "id": 5616426447, "node_id": "LA_kwDOCUB6oc8AAAABTsPdzw", "url": "https://api.github.com/repos/huggingface/transformers/labels/solved", "name": "solved", "color": "B1D6DC", "default": false, "description": "" } ]
closed
false
null
[]
[ "cc @muellerzr and @pacman100 ", "Hello @paxvinci, I am running following example and unable to reproduce the issue:\r\n\r\nCommand: \r\n```\r\ncd transformers\r\n\r\npython examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --do_train --do_eval --output_dir /tmp/test-clm --gradient_accumulation_steps 6 --overwrite_output_dir\r\n```\r\n\r\noutput logs\r\n```\r\n[INFO|trainer.py:1686] 2023-07-17 15:47:49,578 >> ***** Running training *****\r\n[INFO|trainer.py:1687] 2023-07-17 15:47:49,578 >> Num examples = 2,318\r\n[INFO|trainer.py:1688] 2023-07-17 15:47:49,578 >> Num Epochs = 3\r\n[INFO|trainer.py:1689] 2023-07-17 15:47:49,578 >> Instantaneous batch size per device = 8\r\n[INFO|trainer.py:1692] 2023-07-17 15:47:49,578 >> Total train batch size (w. parallel, distributed & accumulation) = 48\r\n[INFO|trainer.py:1693] 2023-07-17 15:47:49,578 >> Gradient Accumulation steps = 6\r\n[INFO|trainer.py:1694] 2023-07-17 15:47:49,578 >> Total optimization steps = 144\r\n[INFO|trainer.py:1695] 2023-07-17 15:47:49,578 >> Number of trainable parameters = 124,439,808\r\n[INFO|integrations.py:716] 2023-07-17 15:47:49,579 >> Automatic Weights & Biases logging enabled, to disable set os.environ[\"WANDB_DISABLED\"] = \"true\"\r\nwandb: Currently logged in as: smangrul. Use `wandb login --relogin` to force relogin\r\nwandb: Tracking run with wandb version 0.15.5\r\nwandb: Run data is saved locally in /home/sourab/transformers/examples/pytorch/language-modeling/wandb/run-20230717_154750-20eekm9c\r\nwandb: Run `wandb offline` to turn off syncing.\r\nwandb: Syncing run usual-dragon-27\r\nwandb: โญ๏ธ View project at https://wandb.ai/smangrul/huggingface\r\nwandb: ๐Ÿš€ View run at https://wandb.ai/smangrul/huggingface/runs/20eekm9c\r\n100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 144/144 [09:01<00:00, 3.76s/it][INFO|trainer.py:1934] 2023-07-17 15:56:56,376 >> \r\n\r\nTraining completed. 
Do not forget to share your model on huggingface.co/models =)\r\n\r\n\r\n{'train_runtime': 546.7981, 'train_samples_per_second': 12.718, 'train_steps_per_second': 0.263, 'train_loss': 3.233305189344618, 'epoch': 2.98}\r\n100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 144/144 [09:01<00:00, 3.76s/it]\r\n[INFO|trainer.py:2807] 2023-07-17 15:56:56,378 >> Saving model checkpoint to /tmp/test-clm\r\n[INFO|configuration_utils.py:458] 2023-07-17 15:56:56,378 >> Configuration saved in /tmp/test-clm/config.json\r\n[INFO|configuration_utils.py:375] 2023-07-17 15:56:56,379 >> Configuration saved in /tmp/test-clm/generation_config.json\r\n[INFO|modeling_utils.py:1851] 2023-07-17 15:56:57,203 >> Model weights saved in /tmp/test-clm/pytorch_model.bin\r\n[INFO|tokenization_utils_base.py:2214] 2023-07-17 15:56:57,203 >> tokenizer config file saved in /tmp/test-clm/tokenizer_config.json\r\n[INFO|tokenization_utils_base.py:2221] 2023-07-17 15:56:57,204 >> Special tokens file saved in /tmp/test-clm/special_tokens_map.json\r\n***** train metrics *****\r\n epoch = 2.98\r\n train_loss = 3.2333\r\n train_runtime = 0:09:06.79\r\n train_samples = 2318\r\n train_samples_per_second = 12.718\r\n train_steps_per_second = 0.263\r\n07/17/2023 15:56:57 - INFO - __main__ - *** Evaluate ***\r\n[INFO|trainer.py:3081] 2023-07-17 15:56:57,284 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:3083] 2023-07-17 15:56:57,284 >> Num examples = 240\r\n[INFO|trainer.py:3086] 2023-07-17 15:56:57,284 >> Batch size = 8\r\n100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 30/30 [00:07<00:00, 4.20it/s]\r\n***** eval metrics *****\r\n epoch = 2.98\r\n eval_accuracy = 0.4212\r\n eval_loss = 3.0811\r\n eval_runtime = 0:00:07.36\r\n eval_samples = 240\r\n eval_samples_per_second = 32.588\r\n eval_steps_per_second = 4.074\r\n perplexity = 21.7826\r\nwandb: Waiting for W&B process to finish... (success).\r\nwandb: \\ 0.015 MB of 0.015 MB uploaded (0.000 MB deduped)\r\nwandb: Run history:\r\nwandb: eval/accuracy โ–\r\nwandb: eval/loss โ–\r\nwandb: eval/runtime โ–\r\nwandb: eval/samples_per_second โ–\r\nwandb: eval/steps_per_second โ–\r\nwandb: train/epoch โ–โ–\r\nwandb: train/global_step โ–โ–\r\nwandb: train/total_flos โ–\r\nwandb: train/train_loss โ–\r\nwandb: train/train_runtime โ–\r\nwandb: train/train_samples_per_second โ–\r\nwandb: train/train_steps_per_second โ–\r\nwandb: \r\nwandb: Run summary:\r\nwandb: eval/accuracy 0.42115\r\nwandb: eval/loss 3.08111\r\nwandb: eval/runtime 7.3646\r\nwandb: eval/samples_per_second 32.588\r\nwandb: eval/steps_per_second 4.074\r\nwandb: train/epoch 2.98\r\nwandb: train/global_step 144\r\nwandb: train/total_flos 3610010714112000.0\r\nwandb: train/train_loss 3.23331\r\nwandb: train/train_runtime 546.7981\r\nwandb: train/train_samples_per_second 12.718\r\nwandb: train/train_steps_per_second 0.263\r\nwandb: \r\nwandb: ๐Ÿš€ View run usual-dragon-27 at: https://wandb.ai/smangrul/huggingface/runs/20eekm9c\r\nwandb: Synced 6 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)\r\n```\r\n\r\nUsing latest transformers and accelerate main branch", "Please share a minimal reproducer so that we can deep dive if the issue still persists", "I cannot share the json file due to confidential data. 
I reinstalled the last transformers and I restarted the train session. If I'll face again the error I'll send an update.", "Update: I downloaded the last version of the transformers via pip and I started again the training. After a couple of problems due to BSOD I restarted the training from checkpoints but I still receive \"**Can't find a valid checkpoint at**\" . There is a warning after the creation of the model\r\n```\r\nThe tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.\r\nThe tokenizer class you load from this checkpoint is 'LLaMATokenizer'.\r\nThe class this function is called from is 'LlamaTokenizer'.\r\nLLAMA Tokenizer created LlamaTokenizer(name_or_path='decapoda-research/llama-7b-hf', vocab_size=32000, model_max_length=1000000000000000019884624838656, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'bos_token': AddedToken(\"\", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken(\"\", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken(\"\", rstrip=False, lstrip=False, single_word=False, normalized=True)}, clean_up_tokenization_spaces=False)\r\n\r\n```\r\nI tried to chage from LlamaTokenizer to LLaMATokenizer but the class does not exists.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,693
1,693
NONE
null
Hi all, I'm facing the error in the subject. I saw this problem has already been solved, but I still hit it. This is how I configured the parameters for the trainer. ``` trainer = transformers.Trainer( model=model, # model is decapoda-research/llama-7b-hf train_dataset=data["train"], args=transformers.TrainingArguments( per_device_train_batch_size=MICRO_BATCH_SIZE, # 4 micro batch size gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS, # 16 auto_find_batch_size=False, # set True to avoid unscale() problem warmup_steps=100, num_train_epochs=EPOCHS, #2 epochs learning_rate=LEARNING_RATE, # 3e-4 fp16=True, logging_steps=20, optim="adamw_torch", output_dir=NAME, save_total_limit=3, save_strategy="steps", save_steps=200, ), data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), ) ``` The strange behaviour is that the problem arises after the end of the first epoch. ``` {'loss': 0.8378, 'learning_rate': 0.00016153846153846153, 'epoch': 0.99} 50%|███████████████████████████████████████████▌ | 831/1660 [15:57<6:52:51, 29.88s/it] Traceback (most recent call last): File "/home/paco/dev/stambecco/train.py", line 138, in <module> trainer.train(resume_from_checkpoint=checkpoint_flag) File "/home/paco/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train return inner_training_loop( File "/home/paco/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1850, in _inner_training_loop self.accelerator.clip_grad_norm_( File "/home/paco/.local/lib/python3.10/site-packages/accelerate/accelerator.py", line 1893, in clip_grad_norm_ self.unscale_gradients() File "/home/paco/.local/lib/python3.10/site-packages/accelerate/accelerator.py", line 1856, in unscale_gradients self.scaler.unscale_(opt) File "/home/paco/.local/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py", line 275, in unscale_ raise RuntimeError("unscale_() has already been called on this optimizer since the last update().") RuntimeError: unscale_() has already been called on this optimizer since the last update(). 50%|█████ | 831/1660 [16:27<16:24, 1.19s/it] ``` ### System Info The environment is WSL `Linux 5.15.90.1-microsoft-standard-WSL2 #1 SMP Fri Jan 27 02:56:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux ` **pip list** ``` Package Version ------------------------ ------------- accelerate 0.20.3 aiohttp 3.8.4 aiosignal 1.3.1 async-timeout 4.0.2 attrs 23.1.0 bitsandbytes 0.39.1 blinker 1.4 certifi 2022.12.7 charset-normalizer 2.1.1 cmake 3.25.0 command-not-found 0.3 cryptography 3.4.8 datasets 2.13.0 dbus-python 1.2.18 dill 0.3.6 distro 1.7.0 distro-info 1.1build1 filelock 3.9.0 frozenlist 1.3.3 fsspec 2023.6.0 httplib2 0.20.2 huggingface-hub 0.15.1 idna 3.4 importlib-metadata 4.6.4 jeepney 0.7.1 Jinja2 3.1.2 keyring 23.5.0 launchpadlib 1.10.16 lazr.restfulclient 0.14.4 lazr.uri 1.0.6 lit 15.0.7 loralib 0.1.1 MarkupSafe 2.1.2 more-itertools 8.10.0 mpmath 1.2.1 multidict 6.0.4 multiprocess 0.70.14 netifaces 0.11.0 networkx 3.0 numpy 1.24.1 nvidia-cublas-cu11 11.10.3.66 nvidia-cuda-cupti-cu11 11.7.101 nvidia-cuda-nvrtc-cu11 11.7.99 nvidia-cuda-runtime-cu11 11.7.99 nvidia-cudnn-cu11 8.5.0.96 nvidia-cufft-cu11 10.9.0.58 nvidia-curand-cu11 10.2.10.91 nvidia-cusolver-cu11 11.4.0.1 nvidia-cusparse-cu11 11.7.4.91 nvidia-nccl-cu11 2.14.3 nvidia-nvtx-cu11 11.7.91 oauthlib 3.2.0 packaging 23.1 pandas 2.0.2 peft 0.4.0.dev0 Pillow 9.3.0 pip 22.0.2 psutil 5.9.5 pyarrow 12.0.1 PyGObject 3.42.1 PyJWT 2.3.0 pyparsing 2.4.7 python-apt 2.4.0+ubuntu1 python-dateutil 2.8.2 pytz 2023.3 PyYAML 5.4.1 regex 2023.6.3 requests 2.28.1 safetensors 0.3.1 scipy 1.10.1 SecretStorage 3.3.1 sentencepiece 0.1.99 setuptools 59.6.0 six 1.16.0 ssh-import-id 5.11 sympy 1.11.1 systemd-python 234 tokenizers 0.13.3 torch 2.0.1+cu117 torchaudio 2.0.2+cu117 torchvision 0.15.2+cu117 tqdm 4.65.0 transformers 4.31.0.dev0 triton 2.0.0 typing_extensions 4.4.0 tzdata 2023.3 ubuntu-advantage-tools 8001 ufw 0.36.1 unattended-upgrades 0.1 urllib3 1.26.13 wadllib 1.3.6 wheel 0.37.1 xxhash 3.2.0 yarl 1.9.2 zipp 1.0.0 ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` tokenizer = LlamaTokenizer.from_pretrained( BASE_MODEL, add_eos_token=True ) model = prepare_model_for_int8_training(model) print("Preparing LoRA weights") config = LoraConfig( r=LORA_R, lora_alpha=LORA_ALPHA, target_modules=["q_proj", "v_proj"], lora_dropout=LORA_DROPOUT, bias="none", task_type="CAUSAL_LM", ) model = get_peft_model(model, config) tokenizer.pad_token_id = 0 # We want this to be different from the eos token if DATA_PATH.endswith(".json") or DATA_PATH.endswith(".jsonl"): data = load_dataset("json", data_files=DATA_PATH) else: data = load_dataset(DATA_PATH) # Functions tokenize() and generate_prompt() read the json file with the following format: # { # "instruction": "", # "input": "", # "output": "" # }, data = data.shuffle().map(lambda x: tokenize(generate_prompt(x))) model.print_trainable_parameters() trainer = transformers.Trainer( model=model, train_dataset=data["train"], args=transformers.TrainingArguments( per_device_train_batch_size=MICRO_BATCH_SIZE, gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS, auto_find_batch_size=False, warmup_steps=100, num_train_epochs=EPOCHS, learning_rate=LEARNING_RATE, fp16=True, logging_steps=20, optim="adamw_torch", output_dir=NAME, save_total_limit=3, save_strategy="steps", save_steps=200, ), data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), ) model.config.use_cache = False checkpoint_folder = os.path.join(os.getcwd(), NAME) # check if the checkpoint folder exists and is not empty checkpoint_flag = os.path.isdir(checkpoint_folder) and len(os.listdir(checkpoint_folder))> 0 print(f"Does a checkpoint folder exists? {checkpoint_flag}\n") trainer.train(resume_from_checkpoint=checkpoint_flag) model.save_pretrained(f"models/{NAME}") ``` ### Expected behavior Not raising the error and continuing with epoch #2
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24849/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24849/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24848
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24848/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24848/comments
https://api.github.com/repos/huggingface/transformers/issues/24848/events
https://github.com/huggingface/transformers/pull/24848
1,807,145,233
PR_kwDOCUB6oc5VozPe
24,848
[DOCS] Example for `LogitsProcessor` class
{ "login": "shauray8", "id": 39147312, "node_id": "MDQ6VXNlcjM5MTQ3MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/39147312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shauray8", "html_url": "https://github.com/shauray8", "followers_url": "https://api.github.com/users/shauray8/followers", "following_url": "https://api.github.com/users/shauray8/following{/other_user}", "gists_url": "https://api.github.com/users/shauray8/gists{/gist_id}", "starred_url": "https://api.github.com/users/shauray8/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shauray8/subscriptions", "organizations_url": "https://api.github.com/users/shauray8/orgs", "repos_url": "https://api.github.com/users/shauray8/repos", "events_url": "https://api.github.com/users/shauray8/events{/privacy}", "received_events_url": "https://api.github.com/users/shauray8/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You'll probably need to run `make fixup` before you next commit :)", "Previously I was having some problems with `make fixup` but now it's done and reformatted, I really like how the Makefile is structured. ", "@sgugger note: the weird part of the diff seems innocuous ๐Ÿค” ", "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger I have addressed all the suggested changes. ", "Thanks!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24848). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? Added a docstring to `RepetitionPenaltyLogitsProcessor`, with some examples as well. @gante let me know if there's anything else I should add or remove from the docs. Fixes # (issue) #24783 ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24848/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24848/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24848", "html_url": "https://github.com/huggingface/transformers/pull/24848", "diff_url": "https://github.com/huggingface/transformers/pull/24848.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24848.patch", "merged_at": 1689862181000 }
https://api.github.com/repos/huggingface/transformers/issues/24847
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24847/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24847/comments
https://api.github.com/repos/huggingface/transformers/issues/24847/events
https://github.com/huggingface/transformers/issues/24847
1,806,860,396
I_kwDOCUB6oc5rsoBs
24,847
Trainer logs to wrong wandb project
{ "login": "david-waterworth", "id": 5028974, "node_id": "MDQ6VXNlcjUwMjg5NzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4", "gravatar_id": "", "url": "https://api.github.com/users/david-waterworth", "html_url": "https://github.com/david-waterworth", "followers_url": "https://api.github.com/users/david-waterworth/followers", "following_url": "https://api.github.com/users/david-waterworth/following{/other_user}", "gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}", "starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions", "organizations_url": "https://api.github.com/users/david-waterworth/orgs", "repos_url": "https://api.github.com/users/david-waterworth/repos", "events_url": "https://api.github.com/users/david-waterworth/events{/privacy}", "received_events_url": "https://api.github.com/users/david-waterworth/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @muellerzr ", "This actually isn't possible currently, you need to set environmental variables such as `export WANDB_PROJECT my-project` ", "@david-waterworth You can use the os module to set the environment variable or use what @muellerzr suggested\r\n```\r\nimport os\r\nos.environ[\"WANDB_PROJECT\"] = \"my-project\"\r\n```", "Thanks @muellerzr but I'm not sure what you mean by not possible? It is possible in general, the steps I follow are:\r\n\r\n1. Create a new project\r\n\r\n``` bash\r\nmkdir my_project\r\ncd my_project\r\npython3 -m venv .venv\r\nsource .venv/bin/activate\r\npip install -U pip\r\n```\r\n\r\n3. Initialise wandb\r\n\r\n``` bash\r\npip install wandb\r\nwandb init\r\n```\r\n\r\nIn the last step I create a new project name (say test), this creates `wandb/settings`\r\n\r\n``` bash\r\ncat wandb/settings\r\n```\r\n\r\n```\r\n[default]\r\nentity = my-user\r\nproject = test\r\nbase_url = https://api.wandb.ai\r\n```\r\n\r\n4. Create a script\r\n\r\n``` python\r\nimport wandb\r\nwandb.init() # Don't pass a project name!\r\n\r\nprint(wandb.run.project_name()) # correctly picked up setting from wandb/settings\r\n```\r\n\r\nI'm assuming that what the trainer does is check the env variable, and if its not set, explicitly passes \"huggingface\" as the project - i.e.\r\n\r\n```\r\nwandb.init(\"huggingface\")\r\n```\r\n\r\nAs a workaround, I can parse the settings file myself and set the env variable", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,692
1,692
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I've written my own `train.py` based on [sequence_classification](https://huggingface.co/docs/transformers/tasks/sequence_classification). I see the same issue for official scripts. Before running, I initialise wandb logging (`wandb init`). This creates `wandb/settings` containing the project name I chose for my model ``` [default] entity = my-username project = my-project base_url = https://api.wandb.ai ``` But the Trainer logs everything to the project `huggingface`, i.e. it's ignoring/overriding the project name I've configured. ### Expected behavior Don't override the configured wandb project with a default.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24847/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24847/timeline
completed
null
null
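A minimal sketch of the workaround settled on in the thread: mirror `wandb/settings` into the `WANDB_PROJECT` environment variable before the Trainer initializes its W&B callback. It assumes the settings file exists in the working directory; the fallback project name is an example:

```python
import configparser
import os

# The Trainer's W&B integration reads WANDB_PROJECT (defaulting to
# "huggingface"), so copy the project from wandb/settings into the env.
settings = configparser.ConfigParser()
settings.read("wandb/settings")
os.environ["WANDB_PROJECT"] = settings["default"].get("project", "my-project")
```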
https://api.github.com/repos/huggingface/transformers/issues/24846
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24846/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24846/comments
https://api.github.com/repos/huggingface/transformers/issues/24846/events
https://github.com/huggingface/transformers/issues/24846
1,806,785,090
I_kwDOCUB6oc5rsVpC
24,846
bloom add_prefix_space= True
{ "login": "Dongximing", "id": 35741613, "node_id": "MDQ6VXNlcjM1NzQxNjEz", "avatar_url": "https://avatars.githubusercontent.com/u/35741613?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dongximing", "html_url": "https://github.com/Dongximing", "followers_url": "https://api.github.com/users/Dongximing/followers", "following_url": "https://api.github.com/users/Dongximing/following{/other_user}", "gists_url": "https://api.github.com/users/Dongximing/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dongximing/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dongximing/subscriptions", "organizations_url": "https://api.github.com/users/Dongximing/orgs", "repos_url": "https://api.github.com/users/Dongximing/repos", "events_url": "https://api.github.com/users/Dongximing/events{/privacy}", "received_events_url": "https://api.github.com/users/Dongximing/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "cc @ArthurZucker and @younesbelkada ", "@younesbelkada @ArthurZucker ", "cc @ArthurZucker as he is more familiar than me regarding tokenizers", "Hey! Thanks for opening this issue. This is half a `tokenizers` issue ( even if you save the tokenizer and modify the `tokenizer_config.json` to set `add_prefix_space=True` in the `pre_tokenizer` the outputs are the same) and half a transformers issue (setting `add_prefix_space=False` and then saving does not change the value saved!) \r\n\r\nWill try to fix it ๐Ÿ‘๐Ÿป ", "Very nice catch, opening a fix right now! There is an issue with sequence pre_tokenizers! " ]
1,689
1,692
1,692
NONE
null
### System Info Hi, dear officer. I use Bloom's BloomTokenizerFast as a tokenizer, and here is an issue. Version = 4.28.0. When I use BloomTokenizerFast, I find that add_prefix_space=True has no effect. Here is the code. `tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom",add_prefix_space = True) print(tokenizer.add_prefix_space) print(tokenizer("Hello world")["input_ids"]) print(transformers.__version__) True [59414, 8876] 4.28.0 ` Here is the other code. `from transformers import BloomTokenizerFast tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom") print(tokenizer("Hello world")["input_ids"]) [59414, 8876] ` I don't know why they encode to the same result. Please have a look! Thanks ### Who can help? @Arth ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction It should encode a different result, since add_prefix_space=True. ### Expected behavior It should encode a different result, since add_prefix_space=True.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24846/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24846/timeline
completed
null
null
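A hedged sketch of how to inspect the fast tokenizer's Rust backend, where the `add_prefix_space` option actually lives for byte-level pre-tokenizers; printing the pre-tokenizer shows whether the kwarg was propagated, which is the failure described above:

```python
from transformers import BloomTokenizerFast

tokenizer = BloomTokenizerFast.from_pretrained(
    "bigscience/bloom", add_prefix_space=True
)
# If the flag never reaches the backend, the pre-tokenizer config (and
# hence the encoding of "Hello world") is identical to the default.
print(tokenizer.backend_tokenizer.pre_tokenizer)
print(tokenizer("Hello world")["input_ids"])
```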
https://api.github.com/repos/huggingface/transformers/issues/24845
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24845/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24845/comments
https://api.github.com/repos/huggingface/transformers/issues/24845/events
https://github.com/huggingface/transformers/issues/24845
1,806,554,291
I_kwDOCUB6oc5rrdSz
24,845
is_vision_available() fails when using pillow-simd
{ "login": "davidas1", "id": 46293514, "node_id": "MDQ6VXNlcjQ2MjkzNTE0", "avatar_url": "https://avatars.githubusercontent.com/u/46293514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidas1", "html_url": "https://github.com/davidas1", "followers_url": "https://api.github.com/users/davidas1/followers", "following_url": "https://api.github.com/users/davidas1/following{/other_user}", "gists_url": "https://api.github.com/users/davidas1/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidas1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidas1/subscriptions", "organizations_url": "https://api.github.com/users/davidas1/orgs", "repos_url": "https://api.github.com/users/davidas1/repos", "events_url": "https://api.github.com/users/davidas1/events{/privacy}", "received_events_url": "https://api.github.com/users/davidas1/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "Thanks for reporting! This might be because of the new way we test if packages are available. @ydshieh would you have some bandwidth to take a look at this?", "Sure, will take a look today!" ]
1,689
1,689
1,689
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.11.0-1022-aws-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0+cu118 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger @apbard - additional checks introduced in https://github.com/huggingface/transformers/pull/23163 are failing in my environment. This is because I'm using [pillow-simd](https://github.com/uploadcare/pillow-simd) and not the vanilla pillow package. Is this drop-in pillow replacement not supported, or is it possible to introduce additional logic to support it? ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction First install pillow-simd (`pip install pillow-simd`), then run the following code: ``` import PIL print(PIL.__version__) import transformers print(transformers.utils.import_utils.is_vision_available()) ``` ### Expected behavior Previous versions (before 4.30.*) returned `True`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24845/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24845/timeline
completed
null
null
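A note on the `is_vision_available()` record above: the reported regression is consistent with an availability check that consults installed-distribution metadata (where pillow-simd registers under its own name) rather than attempting the import (where both distributions expose the `PIL` module). Below is a minimal sketch of a metadata-based check that also accepts the pillow-simd distribution. The helper name `is_pil_available` and the exact fallback order are illustrative assumptions for this issue, not the actual `transformers` implementation.

```python
import importlib.metadata
import importlib.util


def is_pil_available() -> bool:
    """Hypothetical availability check that tolerates drop-in Pillow forks."""
    # The import name is PIL, but the installed distribution may be
    # published as either "Pillow" or "pillow-simd".
    for dist_name in ("Pillow", "pillow-simd"):
        try:
            importlib.metadata.version(dist_name)
            return True
        except importlib.metadata.PackageNotFoundError:
            continue
    # Last resort: check whether the PIL module itself is importable.
    return importlib.util.find_spec("PIL") is not None
```

With pillow-simd installed, this sketch returns `True` via the `"pillow-simd"` metadata lookup, whereas a check pinned to the `"Pillow"` distribution alone would report the library as missing, matching the behavior described in the issue.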
https://api.github.com/repos/huggingface/transformers/issues/24844
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24844/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24844/comments
https://api.github.com/repos/huggingface/transformers/issues/24844/events
https://github.com/huggingface/transformers/issues/24844
1,806,415,201
I_kwDOCUB6oc5rq7Vh
24,844
AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam' for multinode training
{ "login": "larrylawl", "id": 40198156, "node_id": "MDQ6VXNlcjQwMTk4MTU2", "avatar_url": "https://avatars.githubusercontent.com/u/40198156?v=4", "gravatar_id": "", "url": "https://api.github.com/users/larrylawl", "html_url": "https://github.com/larrylawl", "followers_url": "https://api.github.com/users/larrylawl/followers", "following_url": "https://api.github.com/users/larrylawl/following{/other_user}", "gists_url": "https://api.github.com/users/larrylawl/gists{/gist_id}", "starred_url": "https://api.github.com/users/larrylawl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/larrylawl/subscriptions", "organizations_url": "https://api.github.com/users/larrylawl/orgs", "repos_url": "https://api.github.com/users/larrylawl/repos", "events_url": "https://api.github.com/users/larrylawl/events{/privacy}", "received_events_url": "https://api.github.com/users/larrylawl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "For the error log\r\n\r\n```bash\r\nx1000c0s6b0n0: File \"/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/runtime/engine.py\", line 1229, in _configure_basic_optimizer\r\nx1000c0s6b0n0: optimizer = DeepSpeedCPUAdam(model_parameters,\r\n```\r\n\r\nI believe it's best to open an issue on [DeepSpeed GitHub issues](https://github.com/microsoft/DeepSpeed/issues) page for this.\r\n\r\nIt's likely a deepspeed version issue.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,692
1,692
NONE
null
### System Info - `transformers` version: 4.31.0.dev0 - Platform: Linux-4.18.0-305.25.1.el8_4.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.12 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 1.13.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: multinode distributed setup - Deepspeed version: 0.9.5 ### Who can help? @pacman100 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction To reproduce, run the following with your `hostfile` and `deepspeed` config specified. I used the zero3 config [here](https://huggingface.co/docs/transformers/main_classes/deepspeed#zero3-config). ``` deepspeed --hostfile=$PBS_O_WORKDIR/hostfile $ROOT_DIR/transformers/examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path gpt2 \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 8 \ --do_train \ --do_eval \ --output_dir ~/scratch/finetuned_models/debug \ --deepspeed "$ROOT_DIR/FastChat/fastchat/ds_config/ds_config_zero3.json" ``` I ran into the error `AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam'`. Here are the logs: ``` x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/cray/pe/cce/13.0.2/cce/aarch64/lib/libcray-c++-rts.a')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/cray/pe/cti/2.15.10/lib/pkgconfig')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/cray/pe/cce/13.0.2/cce/aarch64/lib')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/cray/pe/cce/13.0.2/cce/x86_64/share/nls/En/%N.cat')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/cray/pe/mpich/8.1.15/ofi/@PRGENV@/@PE_MPICH_GENCOMPS@/lib/pkgconfig')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('libfabric/1.11.0.4.125'), PosixPath('cray-pals/1.1.6'), PosixPath('craype-x86-rome'), 
PosixPath('craype-network-ofi'), PosixPath('cce/13.0.2'), PosixPath('cray-dsmml/0.2.2'), PosixPath('perftools-base/22.04.0'), PosixPath('cray-mpich/8.1.15'), PosixPath('craype/2.7.15'), PosixPath('PrgEnv-cray/8.3.3')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/cray/pe/cce/13.0.2/cce/aarch64/include/craylibs')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/gcc-cross-aarch64/8.1.0/aarch64')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/cray/pe/fftw/3.3.8.13/@PE_FFTW_DEFAULT_TARGET@/lib/pkgconfig')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/cray/pe/libsci/21.08.1.2/@PRGENV@/@PE_LIBSCI_DEFAULT_GENCOMPS@/@PE_LIBSCI_DEFAULT_TARGET@/lib/pkgconfig')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/cray/pe/sma/11.5.3.beta/ofi/sma@PE_SMA_DEFAULT_DIR_DEFAULT64@/lib64/pkgconfig')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths... x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/clmgr/man'), PosixPath('/opt/cray/pe/man')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/modulefiles')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('libexec64/opts')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: CUDA SETUP: Highest compute capability among GPUs detected: 8.0 x1000c1s1b0n0: CUDA SETUP: Detected CUDA version 116 x1000c1s1b0n0: CUDA SETUP: Loading binary /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so... 
x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('() { eval `/opt/cray/pe/modules/3.2.11.6/bin/modulecmd bash $*`\n}')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/cuda/lib64')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: No libcudart.so found! Install CUDA or the cudatoolkit package (anaconda)! x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: [2023-07-16 11:23:41,585] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect) x1000c1s1b0n0: [2023-07-16 11:23:41,586] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect) x1000c0s6b0n0: [2023-07-16 11:23:44,841] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented x1000c0s6b0n0: [2023-07-16 11:23:44,841] [INFO] [comm.py:594:init_distributed] cdb=None x1000c0s6b0n0: [2023-07-16 11:23:44,841] [INFO] [comm.py:625:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl x1000c0s6b0n0: [2023-07-16 11:23:44,842] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented x1000c0s6b0n0: [2023-07-16 11:23:44,842] [INFO] [comm.py:594:init_distributed] cdb=None x1000c1s1b0n0: [2023-07-16 11:23:46,769] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented x1000c1s1b0n0: [2023-07-16 11:23:46,770] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented x1000c1s1b0n0: [2023-07-16 11:23:46,770] [INFO] [comm.py:594:init_distributed] cdb=None x1000c1s1b0n0: [2023-07-16 11:23:46,770] [INFO] [comm.py:594:init_distributed] cdb=None x1000c0s6b0n0: 07/16/2023 11:23:47 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1distributed training: True, 16-bits training: False x1000c0s6b0n0: 07/16/2023 11:23:47 - INFO - __main__ - Training/evaluation parameters TrainingArguments( x1000c0s6b0n0: _n_gpu=1, x1000c0s6b0n0: adafactor=False, x1000c0s6b0n0: adam_beta1=0.9, x1000c0s6b0n0: adam_beta2=0.999, x1000c0s6b0n0: adam_epsilon=1e-08, x1000c0s6b0n0: auto_find_batch_size=False, x1000c0s6b0n0: bf16=False, x1000c0s6b0n0: bf16_full_eval=False, x1000c0s6b0n0: data_seed=None, x1000c0s6b0n0: dataloader_drop_last=False, x1000c0s6b0n0: dataloader_num_workers=0, x1000c0s6b0n0: dataloader_pin_memory=True, x1000c0s6b0n0: ddp_backend=None, x1000c0s6b0n0: ddp_broadcast_buffers=None, x1000c0s6b0n0: ddp_bucket_cap_mb=None, x1000c0s6b0n0: ddp_find_unused_parameters=None, x1000c0s6b0n0: ddp_timeout=1800, x1000c0s6b0n0: debug=[], x1000c0s6b0n0: deepspeed=/home/users/industry/dso/lannliat/FastChat/fastchat/ds_config/ds_config_zero3.json, x1000c0s6b0n0: disable_tqdm=False, x1000c0s6b0n0: do_eval=True, x1000c0s6b0n0: do_predict=False, x1000c0s6b0n0: do_train=True, x1000c0s6b0n0: eval_accumulation_steps=None, x1000c0s6b0n0: eval_delay=0, x1000c0s6b0n0: eval_steps=None, x1000c0s6b0n0: evaluation_strategy=no, x1000c0s6b0n0: fp16=False, x1000c0s6b0n0: fp16_backend=auto, x1000c0s6b0n0: fp16_full_eval=False, x1000c0s6b0n0: 
fp16_opt_level=O1, x1000c0s6b0n0: fsdp=[], x1000c0s6b0n0: fsdp_config={'fsdp_min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False}, x1000c0s6b0n0: fsdp_min_num_params=0, x1000c0s6b0n0: fsdp_transformer_layer_cls_to_wrap=None, x1000c0s6b0n0: full_determinism=False, x1000c0s6b0n0: gradient_accumulation_steps=1, x1000c0s6b0n0: gradient_checkpointing=False, x1000c0s6b0n0: greater_is_better=None, x1000c0s6b0n0: group_by_length=False, x1000c0s6b0n0: half_precision_backend=auto, x1000c0s6b0n0: hub_model_id=None, x1000c0s6b0n0: hub_private_repo=False, x1000c0s6b0n0: hub_strategy=every_save, x1000c0s6b0n0: hub_token=<HUB_TOKEN>, x1000c0s6b0n0: ignore_data_skip=False, x1000c0s6b0n0: include_inputs_for_metrics=False, x1000c0s6b0n0: jit_mode_eval=False, x1000c0s6b0n0: label_names=None, x1000c0s6b0n0: label_smoothing_factor=0.0, x1000c0s6b0n0: learning_rate=5e-05, x1000c0s6b0n0: length_column_name=length, x1000c0s6b0n0: load_best_model_at_end=False, x1000c0s6b0n0: local_rank=0, x1000c0s6b0n0: log_level=passive, x1000c0s6b0n0: log_level_replica=warning, x1000c0s6b0n0: log_on_each_node=True, x1000c0s6b0n0: logging_dir=/home/users/industry/dso/lannliat/scratch/finetuned_models/debug/runs/Jul16_11-23-44_x1000c0s6b0n0, x1000c0s6b0n0: logging_first_step=False, x1000c0s6b0n0: logging_nan_inf_filter=True, x1000c0s6b0n0: logging_steps=500, x1000c0s6b0n0: logging_strategy=steps, x1000c0s6b0n0: lr_scheduler_type=linear, x1000c0s6b0n0: max_grad_norm=1.0, x1000c0s6b0n0: max_steps=-1, x1000c0s6b0n0: metric_for_best_model=None, x1000c0s6b0n0: mp_parameters=, x1000c0s6b0n0: no_cuda=False, x1000c0s6b0n0: num_train_epochs=3.0, x1000c0s6b0n0: optim=adamw_hf, x1000c0s6b0n0: optim_args=None, x1000c0s6b0n0: output_dir=/home/users/industry/dso/lannliat/scratch/finetuned_models/debug, x1000c0s6b0n0: overwrite_output_dir=False, x1000c0s6b0n0: past_index=-1, x1000c0s6b0n0: per_device_eval_batch_size=8, x1000c0s6b0n0: per_device_train_batch_size=8, x1000c0s6b0n0: prediction_loss_only=False, x1000c0s6b0n0: push_to_hub=False, x1000c0s6b0n0: push_to_hub_model_id=None, x1000c0s6b0n0: push_to_hub_organization=None, x1000c0s6b0n0: push_to_hub_token=<PUSH_TO_HUB_TOKEN>, x1000c0s6b0n0: ray_scope=last, x1000c0s6b0n0: remove_unused_columns=True, x1000c0s6b0n0: report_to=['wandb'], x1000c0s6b0n0: resume_from_checkpoint=None, x1000c0s6b0n0: run_name=/home/users/industry/dso/lannliat/scratch/finetuned_models/debug, x1000c0s6b0n0: save_on_each_node=False, x1000c0s6b0n0: save_safetensors=False, x1000c0s6b0n0: save_steps=500, x1000c0s6b0n0: save_strategy=steps, x1000c0s6b0n0: save_total_limit=None, x1000c0s6b0n0: seed=42, x1000c0s6b0n0: sharded_ddp=[], x1000c0s6b0n0: skip_memory_metrics=True, x1000c0s6b0n0: tf32=None, x1000c0s6b0n0: torch_compile=False, x1000c0s6b0n0: torch_compile_backend=None, x1000c0s6b0n0: torch_compile_mode=None, x1000c0s6b0n0: torchdynamo=None, x1000c0s6b0n0: tpu_metrics_debug=False, x1000c0s6b0n0: tpu_num_cores=None, x1000c0s6b0n0: use_ipex=False, x1000c0s6b0n0: use_legacy_prediction_loop=False, x1000c0s6b0n0: use_mps_device=False, x1000c0s6b0n0: warmup_ratio=0.0, x1000c0s6b0n0: warmup_steps=0, x1000c0s6b0n0: weight_decay=0.0, x1000c0s6b0n0: xpu_backend=None, x1000c0s6b0n0: ) x1000c0s6b0n0: 07/16/2023 11:23:47 - WARNING - __main__ - Process rank: 1, device: cuda:1, n_gpu: 1distributed training: True, 16-bits training: False x1000c1s1b0n0: 07/16/2023 11:23:47 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1distributed training: True, 16-bits training: False x1000c1s1b0n0: 07/16/2023 11:23:47 - 
INFO - __main__ - Training/evaluation parameters TrainingArguments( x1000c1s1b0n0: _n_gpu=1, x1000c1s1b0n0: adafactor=False, x1000c1s1b0n0: adam_beta1=0.9, x1000c1s1b0n0: adam_beta2=0.999, x1000c1s1b0n0: adam_epsilon=1e-08, x1000c1s1b0n0: auto_find_batch_size=False, x1000c1s1b0n0: bf16=False, x1000c1s1b0n0: bf16_full_eval=False, x1000c1s1b0n0: data_seed=None, x1000c1s1b0n0: dataloader_drop_last=False, x1000c1s1b0n0: dataloader_num_workers=0, x1000c1s1b0n0: dataloader_pin_memory=True, x1000c1s1b0n0: ddp_backend=None, x1000c1s1b0n0: ddp_broadcast_buffers=None, x1000c1s1b0n0: ddp_bucket_cap_mb=None, x1000c1s1b0n0: ddp_find_unused_parameters=None, x1000c1s1b0n0: ddp_timeout=1800, x1000c1s1b0n0: debug=[], x1000c1s1b0n0: deepspeed=/home/users/industry/dso/lannliat/FastChat/fastchat/ds_config/ds_config_zero3.json, x1000c1s1b0n0: disable_tqdm=False, x1000c1s1b0n0: do_eval=True, x1000c1s1b0n0: do_predict=False, x1000c1s1b0n0: do_train=True, x1000c1s1b0n0: eval_accumulation_steps=None, x1000c1s1b0n0: eval_delay=0, x1000c1s1b0n0: eval_steps=None, x1000c1s1b0n0: evaluation_strategy=no, x1000c1s1b0n0: fp16=False, x1000c1s1b0n0: fp16_backend=auto, x1000c1s1b0n0: fp16_full_eval=False, x1000c1s1b0n0: fp16_opt_level=O1, x1000c1s1b0n0: fsdp=[], x1000c1s1b0n0: fsdp_config={'fsdp_min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False}, x1000c1s1b0n0: fsdp_min_num_params=0, x1000c1s1b0n0: fsdp_transformer_layer_cls_to_wrap=None, x1000c1s1b0n0: full_determinism=False, x1000c1s1b0n0: gradient_accumulation_steps=1, x1000c1s1b0n0: gradient_checkpointing=False, x1000c1s1b0n0: greater_is_better=None, x1000c1s1b0n0: group_by_length=False, x1000c1s1b0n0: half_precision_backend=auto, x1000c1s1b0n0: hub_model_id=None, x1000c1s1b0n0: hub_private_repo=False, x1000c1s1b0n0: hub_strategy=every_save, x1000c1s1b0n0: hub_token=<HUB_TOKEN>, x1000c1s1b0n0: ignore_data_skip=False, x1000c1s1b0n0: include_inputs_for_metrics=False, x1000c1s1b0n0: jit_mode_eval=False, x1000c1s1b0n0: label_names=None, x1000c1s1b0n0: label_smoothing_factor=0.0, x1000c1s1b0n0: learning_rate=5e-05, x1000c1s1b0n0: length_column_name=length, x1000c1s1b0n0: load_best_model_at_end=False, x1000c1s1b0n0: local_rank=0, x1000c1s1b0n0: log_level=passive, x1000c1s1b0n0: log_level_replica=warning, x1000c1s1b0n0: log_on_each_node=True, x1000c1s1b0n0: logging_dir=/home/users/industry/dso/lannliat/scratch/finetuned_models/debug/runs/Jul16_11-23-46_x1000c1s1b0n0, x1000c1s1b0n0: logging_first_step=False, x1000c1s1b0n0: logging_nan_inf_filter=True, x1000c1s1b0n0: logging_steps=500, x1000c1s1b0n0: logging_strategy=steps, x1000c1s1b0n0: lr_scheduler_type=linear, x1000c1s1b0n0: max_grad_norm=1.0, x1000c1s1b0n0: max_steps=-1, x1000c1s1b0n0: metric_for_best_model=None, x1000c1s1b0n0: mp_parameters=, x1000c1s1b0n0: no_cuda=False, x1000c1s1b0n0: num_train_epochs=3.0, x1000c1s1b0n0: optim=adamw_hf, x1000c1s1b0n0: optim_args=None, x1000c1s1b0n0: output_dir=/home/users/industry/dso/lannliat/scratch/finetuned_models/debug, x1000c1s1b0n0: overwrite_output_dir=False, x1000c1s1b0n0: past_index=-1, x1000c1s1b0n0: per_device_eval_batch_size=8, x1000c1s1b0n0: per_device_train_batch_size=8, x1000c1s1b0n0: prediction_loss_only=False, x1000c1s1b0n0: push_to_hub=False, x1000c1s1b0n0: push_to_hub_model_id=None, x1000c1s1b0n0: push_to_hub_organization=None, x1000c1s1b0n0: push_to_hub_token=<PUSH_TO_HUB_TOKEN>, x1000c1s1b0n0: ray_scope=last, x1000c1s1b0n0: remove_unused_columns=True, x1000c1s1b0n0: report_to=['wandb'], x1000c1s1b0n0: resume_from_checkpoint=None, x1000c1s1b0n0: 
run_name=/home/users/industry/dso/lannliat/scratch/finetuned_models/debug, x1000c1s1b0n0: save_on_each_node=False, x1000c1s1b0n0: save_safetensors=False, x1000c1s1b0n0: save_steps=500, x1000c1s1b0n0: save_strategy=steps, x1000c1s1b0n0: save_total_limit=None, x1000c1s1b0n0: seed=42, x1000c1s1b0n0: sharded_ddp=[], x1000c1s1b0n0: skip_memory_metrics=True, x1000c1s1b0n0: tf32=None, x1000c1s1b0n0: torch_compile=False, x1000c1s1b0n0: torch_compile_backend=None, x1000c1s1b0n0: torch_compile_mode=None, x1000c1s1b0n0: torchdynamo=None, x1000c1s1b0n0: tpu_metrics_debug=False, x1000c1s1b0n0: tpu_num_cores=None, x1000c1s1b0n0: use_ipex=False, x1000c1s1b0n0: use_legacy_prediction_loop=False, x1000c1s1b0n0: use_mps_device=False, x1000c1s1b0n0: warmup_ratio=0.0, x1000c1s1b0n0: warmup_steps=0, x1000c1s1b0n0: weight_decay=0.0, x1000c1s1b0n0: xpu_backend=None, x1000c1s1b0n0: ) x1000c1s1b0n0: 07/16/2023 11:23:47 - WARNING - __main__ - Process rank: 1, device: cuda:1, n_gpu: 1distributed training: True, 16-bits training: False x1000c1s1b0n0: 07/16/2023 11:23:49 - INFO - datasets.info - Loading Dataset Infos from /home/users/industry/dso/lannliat/.cache/huggingface/modules/datasets_modules/datasets/wikitext/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126 x1000c1s1b0n0: 07/16/2023 11:23:49 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists. x1000c1s1b0n0: 07/16/2023 11:23:49 - INFO - datasets.info - Loading Dataset info from /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126 x1000c1s1b0n0: 07/16/2023 11:23:49 - WARNING - datasets.builder - Found cached dataset wikitext (/home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126) x1000c1s1b0n0: 07/16/2023 11:23:49 - INFO - datasets.info - Loading Dataset info from /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 272.09it/s] x1000c0s6b0n0: 07/16/2023 11:23:50 - INFO - datasets.info - Loading Dataset Infos from /home/users/industry/dso/lannliat/.cache/huggingface/modules/datasets_modules/datasets/wikitext/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126 x1000c0s6b0n0: 07/16/2023 11:23:50 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists. 
x1000c0s6b0n0: 07/16/2023 11:23:50 - INFO - datasets.info - Loading Dataset info from /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126 x1000c0s6b0n0: 07/16/2023 11:23:50 - WARNING - datasets.builder - Found cached dataset wikitext (/home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126) 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 194.61it/s] x1000c0s6b0n0: 07/16/2023 11:23:50 - WARNING - datasets.builder - Found cached dataset wikitext (/home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126) x1000c0s6b0n0: 07/16/2023 11:23:50 - INFO - datasets.info - Loading Dataset info from /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 811.12it/s] x1000c1s1b0n0: [INFO|configuration_utils.py:712] 2023-07-16 11:23:50,641 >> loading configuration file config.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json x1000c1s1b0n0: [INFO|configuration_utils.py:768] 2023-07-16 11:23:50,641 >> Model config GPT2Config { x1000c1s1b0n0: "_name_or_path": "gpt2", x1000c1s1b0n0: "activation_function": "gelu_new", x1000c1s1b0n0: "architectures": [ x1000c1s1b0n0: "GPT2LMHeadModel" x1000c1s1b0n0: ], x1000c1s1b0n0: "attn_pdrop": 0.1, x1000c1s1b0n0: "bos_token_id": 50256, x1000c1s1b0n0: "embd_pdrop": 0.1, x1000c1s1b0n0: "eos_token_id": 50256, x1000c1s1b0n0: "initializer_range": 0.02, x1000c1s1b0n0: "layer_norm_epsilon": 1e-05, x1000c1s1b0n0: "model_type": "gpt2", x1000c1s1b0n0: "n_ctx": 1024, x1000c1s1b0n0: "n_embd": 768, x1000c1s1b0n0: "n_head": 12, x1000c1s1b0n0: "n_inner": null, x1000c1s1b0n0: "n_layer": 12, x1000c1s1b0n0: "n_positions": 1024, x1000c1s1b0n0: "reorder_and_upcast_attn": false, x1000c1s1b0n0: "resid_pdrop": 0.1, x1000c1s1b0n0: "scale_attn_by_inverse_layer_idx": false, x1000c1s1b0n0: "scale_attn_weights": true, x1000c1s1b0n0: "summary_activation": null, x1000c1s1b0n0: "summary_first_dropout": 0.1, x1000c1s1b0n0: "summary_proj_to_labels": true, x1000c1s1b0n0: "summary_type": "cls_index", x1000c1s1b0n0: "summary_use_proj": true, x1000c1s1b0n0: "task_specific_params": { x1000c1s1b0n0: "text-generation": { x1000c1s1b0n0: "do_sample": true, x1000c1s1b0n0: "max_length": 50 x1000c1s1b0n0: } x1000c1s1b0n0: }, x1000c1s1b0n0: "transformers_version": "4.31.0.dev0", x1000c1s1b0n0: "use_cache": true, x1000c1s1b0n0: "vocab_size": 50257 x1000c1s1b0n0: } x1000c1s1b0n0: x1000c0s6b0n0: [INFO|configuration_utils.py:712] 2023-07-16 11:23:50,647 >> loading configuration file config.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json x1000c0s6b0n0: [INFO|configuration_utils.py:768] 2023-07-16 11:23:50,648 >> Model config GPT2Config { x1000c0s6b0n0: "_name_or_path": "gpt2", x1000c0s6b0n0: "activation_function": "gelu_new", x1000c0s6b0n0: "architectures": [ x1000c0s6b0n0: "GPT2LMHeadModel" x1000c0s6b0n0: ], x1000c0s6b0n0: "attn_pdrop": 0.1, x1000c0s6b0n0: "bos_token_id": 50256, x1000c0s6b0n0: "embd_pdrop": 0.1, x1000c0s6b0n0: "eos_token_id": 50256, 
x1000c0s6b0n0: "initializer_range": 0.02, x1000c0s6b0n0: "layer_norm_epsilon": 1e-05, x1000c0s6b0n0: "model_type": "gpt2", x1000c0s6b0n0: "n_ctx": 1024, x1000c0s6b0n0: "n_embd": 768, x1000c0s6b0n0: "n_head": 12, x1000c0s6b0n0: "n_inner": null, x1000c0s6b0n0: "n_layer": 12, x1000c0s6b0n0: "n_positions": 1024, x1000c0s6b0n0: "reorder_and_upcast_attn": false, x1000c0s6b0n0: "resid_pdrop": 0.1, x1000c0s6b0n0: "scale_attn_by_inverse_layer_idx": false, x1000c0s6b0n0: "scale_attn_weights": true, x1000c0s6b0n0: "summary_activation": null, x1000c0s6b0n0: "summary_first_dropout": 0.1, x1000c0s6b0n0: "summary_proj_to_labels": true, x1000c0s6b0n0: "summary_type": "cls_index", x1000c0s6b0n0: "summary_use_proj": true, x1000c0s6b0n0: "task_specific_params": { x1000c0s6b0n0: "text-generation": { x1000c0s6b0n0: "do_sample": true, x1000c0s6b0n0: "max_length": 50 x1000c0s6b0n0: } x1000c0s6b0n0: }, x1000c0s6b0n0: "transformers_version": "4.31.0.dev0", x1000c0s6b0n0: "use_cache": true, x1000c0s6b0n0: "vocab_size": 50257 x1000c0s6b0n0: } x1000c0s6b0n0: x1000c1s1b0n0: 07/16/2023 11:23:50 - WARNING - datasets.builder - Found cached dataset wikitext (/home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126) 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 1171.27it/s] x1000c0s6b0n0: [INFO|tokenization_auto.py:512] 2023-07-16 11:23:50,890 >> Could not locate the tokenizer configuration file, will try to use the model config instead. x1000c0s6b0n0: [INFO|configuration_utils.py:712] 2023-07-16 11:23:51,134 >> loading configuration file config.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json x1000c0s6b0n0: [INFO|configuration_utils.py:768] 2023-07-16 11:23:51,135 >> Model config GPT2Config { x1000c0s6b0n0: "_name_or_path": "gpt2", x1000c0s6b0n0: "activation_function": "gelu_new", x1000c0s6b0n0: "architectures": [ x1000c0s6b0n0: "GPT2LMHeadModel" x1000c0s6b0n0: ], x1000c0s6b0n0: "attn_pdrop": 0.1, x1000c0s6b0n0: "bos_token_id": 50256, x1000c0s6b0n0: "embd_pdrop": 0.1, x1000c0s6b0n0: "eos_token_id": 50256, x1000c0s6b0n0: "initializer_range": 0.02, x1000c0s6b0n0: "layer_norm_epsilon": 1e-05, x1000c0s6b0n0: "model_type": "gpt2", x1000c0s6b0n0: "n_ctx": 1024, x1000c0s6b0n0: "n_embd": 768, x1000c0s6b0n0: "n_head": 12, x1000c0s6b0n0: "n_inner": null, x1000c0s6b0n0: "n_layer": 12, x1000c0s6b0n0: "n_positions": 1024, x1000c0s6b0n0: "reorder_and_upcast_attn": false, x1000c0s6b0n0: "resid_pdrop": 0.1, x1000c0s6b0n0: "scale_attn_by_inverse_layer_idx": false, x1000c0s6b0n0: "scale_attn_weights": true, x1000c0s6b0n0: "summary_activation": null, x1000c0s6b0n0: "summary_first_dropout": 0.1, x1000c0s6b0n0: "summary_proj_to_labels": true, x1000c0s6b0n0: "summary_type": "cls_index", x1000c0s6b0n0: "summary_use_proj": true, x1000c0s6b0n0: "task_specific_params": { x1000c0s6b0n0: "text-generation": { x1000c0s6b0n0: "do_sample": true, x1000c0s6b0n0: "max_length": 50 x1000c0s6b0n0: } x1000c0s6b0n0: }, x1000c0s6b0n0: "transformers_version": "4.31.0.dev0", x1000c0s6b0n0: "use_cache": true, x1000c0s6b0n0: "vocab_size": 50257 x1000c0s6b0n0: } x1000c0s6b0n0: x1000c1s1b0n0: [INFO|tokenization_auto.py:512] 2023-07-16 11:23:51,338 >> Could not locate the tokenizer configuration file, will try to use the model config instead. 
x1000c1s1b0n0: [INFO|configuration_utils.py:712] 2023-07-16 11:23:51,581 >> loading configuration file config.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json x1000c1s1b0n0: [INFO|configuration_utils.py:768] 2023-07-16 11:23:51,582 >> Model config GPT2Config { x1000c1s1b0n0: "_name_or_path": "gpt2", x1000c1s1b0n0: "activation_function": "gelu_new", x1000c1s1b0n0: "architectures": [ x1000c1s1b0n0: "GPT2LMHeadModel" x1000c1s1b0n0: ], x1000c1s1b0n0: "attn_pdrop": 0.1, x1000c1s1b0n0: "bos_token_id": 50256, x1000c1s1b0n0: "embd_pdrop": 0.1, x1000c1s1b0n0: "eos_token_id": 50256, x1000c1s1b0n0: "initializer_range": 0.02, x1000c1s1b0n0: "layer_norm_epsilon": 1e-05, x1000c1s1b0n0: "model_type": "gpt2", x1000c1s1b0n0: "n_ctx": 1024, x1000c1s1b0n0: "n_embd": 768, x1000c1s1b0n0: "n_head": 12, x1000c1s1b0n0: "n_inner": null, x1000c1s1b0n0: "n_layer": 12, x1000c1s1b0n0: "n_positions": 1024, x1000c1s1b0n0: "reorder_and_upcast_attn": false, x1000c1s1b0n0: "resid_pdrop": 0.1, x1000c1s1b0n0: "scale_attn_by_inverse_layer_idx": false, x1000c1s1b0n0: "scale_attn_weights": true, x1000c1s1b0n0: "summary_activation": null, x1000c1s1b0n0: "summary_first_dropout": 0.1, x1000c1s1b0n0: "summary_proj_to_labels": true, x1000c1s1b0n0: "summary_type": "cls_index", x1000c1s1b0n0: "summary_use_proj": true, x1000c1s1b0n0: "task_specific_params": { x1000c1s1b0n0: "text-generation": { x1000c1s1b0n0: "do_sample": true, x1000c1s1b0n0: "max_length": 50 x1000c1s1b0n0: } x1000c1s1b0n0: }, x1000c1s1b0n0: "transformers_version": "4.31.0.dev0", x1000c1s1b0n0: "use_cache": true, x1000c1s1b0n0: "vocab_size": 50257 x1000c1s1b0n0: } x1000c1s1b0n0: x1000c0s6b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,137 >> loading file vocab.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/vocab.json x1000c0s6b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,137 >> loading file merges.txt from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/merges.txt x1000c0s6b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,137 >> loading file tokenizer.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/tokenizer.json x1000c0s6b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,137 >> loading file added_tokens.json from cache at None x1000c0s6b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,137 >> loading file special_tokens_map.json from cache at None x1000c0s6b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,137 >> loading file tokenizer_config.json from cache at None x1000c0s6b0n0: [INFO|configuration_utils.py:712] 2023-07-16 11:23:52,139 >> loading configuration file config.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json x1000c0s6b0n0: [INFO|configuration_utils.py:768] 2023-07-16 11:23:52,140 >> Model config GPT2Config { x1000c0s6b0n0: "_name_or_path": "gpt2", x1000c0s6b0n0: "activation_function": "gelu_new", x1000c0s6b0n0: "architectures": [ x1000c0s6b0n0: "GPT2LMHeadModel" x1000c0s6b0n0: ], x1000c0s6b0n0: "attn_pdrop": 0.1, x1000c0s6b0n0: "bos_token_id": 50256, x1000c0s6b0n0: "embd_pdrop": 0.1, x1000c0s6b0n0: 
"eos_token_id": 50256, x1000c0s6b0n0: "initializer_range": 0.02, x1000c0s6b0n0: "layer_norm_epsilon": 1e-05, x1000c0s6b0n0: "model_type": "gpt2", x1000c0s6b0n0: "n_ctx": 1024, x1000c0s6b0n0: "n_embd": 768, x1000c0s6b0n0: "n_head": 12, x1000c0s6b0n0: "n_inner": null, x1000c0s6b0n0: "n_layer": 12, x1000c0s6b0n0: "n_positions": 1024, x1000c0s6b0n0: "reorder_and_upcast_attn": false, x1000c0s6b0n0: "resid_pdrop": 0.1, x1000c0s6b0n0: "scale_attn_by_inverse_layer_idx": false, x1000c0s6b0n0: "scale_attn_weights": true, x1000c0s6b0n0: "summary_activation": null, x1000c0s6b0n0: "summary_first_dropout": 0.1, x1000c0s6b0n0: "summary_proj_to_labels": true, x1000c0s6b0n0: "summary_type": "cls_index", x1000c0s6b0n0: "summary_use_proj": true, x1000c0s6b0n0: "task_specific_params": { x1000c0s6b0n0: "text-generation": { x1000c0s6b0n0: "do_sample": true, x1000c0s6b0n0: "max_length": 50 x1000c0s6b0n0: } x1000c0s6b0n0: }, x1000c0s6b0n0: "transformers_version": "4.31.0.dev0", x1000c0s6b0n0: "use_cache": true, x1000c0s6b0n0: "vocab_size": 50257 x1000c0s6b0n0: } x1000c0s6b0n0: x1000c0s6b0n0: [INFO|modeling_utils.py:2603] 2023-07-16 11:23:52,218 >> loading weights file model.safetensors from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/model.safetensors x1000c1s1b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,571 >> loading file vocab.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/vocab.json x1000c1s1b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,571 >> loading file merges.txt from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/merges.txt x1000c1s1b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,571 >> loading file tokenizer.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/tokenizer.json x1000c1s1b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,571 >> loading file added_tokens.json from cache at None x1000c1s1b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,571 >> loading file special_tokens_map.json from cache at None x1000c1s1b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,571 >> loading file tokenizer_config.json from cache at None x1000c1s1b0n0: [INFO|configuration_utils.py:712] 2023-07-16 11:23:52,573 >> loading configuration file config.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json x1000c1s1b0n0: [INFO|configuration_utils.py:768] 2023-07-16 11:23:52,574 >> Model config GPT2Config { x1000c1s1b0n0: "_name_or_path": "gpt2", x1000c1s1b0n0: "activation_function": "gelu_new", x1000c1s1b0n0: "architectures": [ x1000c1s1b0n0: "GPT2LMHeadModel" x1000c1s1b0n0: ], x1000c1s1b0n0: "attn_pdrop": 0.1, x1000c1s1b0n0: "bos_token_id": 50256, x1000c1s1b0n0: "embd_pdrop": 0.1, x1000c1s1b0n0: "eos_token_id": 50256, x1000c1s1b0n0: "initializer_range": 0.02, x1000c1s1b0n0: "layer_norm_epsilon": 1e-05, x1000c1s1b0n0: "model_type": "gpt2", x1000c1s1b0n0: "n_ctx": 1024, x1000c1s1b0n0: "n_embd": 768, x1000c1s1b0n0: "n_head": 12, x1000c1s1b0n0: "n_inner": null, x1000c1s1b0n0: "n_layer": 12, x1000c1s1b0n0: "n_positions": 1024, x1000c1s1b0n0: "reorder_and_upcast_attn": false, x1000c1s1b0n0: 
"resid_pdrop": 0.1, x1000c1s1b0n0: "scale_attn_by_inverse_layer_idx": false, x1000c1s1b0n0: "scale_attn_weights": true, x1000c1s1b0n0: "summary_activation": null, x1000c1s1b0n0: "summary_first_dropout": 0.1, x1000c1s1b0n0: "summary_proj_to_labels": true, x1000c1s1b0n0: "summary_type": "cls_index", x1000c1s1b0n0: "summary_use_proj": true, x1000c1s1b0n0: "task_specific_params": { x1000c1s1b0n0: "text-generation": { x1000c1s1b0n0: "do_sample": true, x1000c1s1b0n0: "max_length": 50 x1000c1s1b0n0: } x1000c1s1b0n0: }, x1000c1s1b0n0: "transformers_version": "4.31.0.dev0", x1000c1s1b0n0: "use_cache": true, x1000c1s1b0n0: "vocab_size": 50257 x1000c1s1b0n0: } x1000c1s1b0n0: x1000c1s1b0n0: [INFO|modeling_utils.py:2603] 2023-07-16 11:23:52,649 >> loading weights file model.safetensors from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/model.safetensors x1000c1s1b0n0: [INFO|modeling_utils.py:2694] 2023-07-16 11:23:52,754 >> Detected DeepSpeed ZeRO-3: activating zero.init() for this model x1000c0s6b0n0: [INFO|modeling_utils.py:2694] 2023-07-16 11:23:52,754 >> Detected DeepSpeed ZeRO-3: activating zero.init() for this model x1000c1s1b0n0: [INFO|configuration_utils.py:599] 2023-07-16 11:23:52,757 >> Generate config GenerationConfig { x1000c1s1b0n0: "_from_model_config": true, x1000c1s1b0n0: "bos_token_id": 50256, x1000c1s1b0n0: "eos_token_id": 50256, x1000c1s1b0n0: "transformers_version": "4.31.0.dev0" x1000c1s1b0n0: } x1000c1s1b0n0: x1000c0s6b0n0: [INFO|configuration_utils.py:599] 2023-07-16 11:23:52,759 >> Generate config GenerationConfig { x1000c0s6b0n0: "_from_model_config": true, x1000c0s6b0n0: "bos_token_id": 50256, x1000c0s6b0n0: "eos_token_id": 50256, x1000c0s6b0n0: "transformers_version": "4.31.0.dev0" x1000c0s6b0n0: } x1000c0s6b0n0: x1000c0s6b0n0: [2023-07-16 11:23:58,743] [INFO] [partition_parameters.py:453:__exit__] finished initializing model with 0.16B parameters x1000c1s1b0n0: [INFO|modeling_utils.py:3329] 2023-07-16 11:23:59,764 >> All model checkpoint weights were used when initializing GPT2LMHeadModel. x1000c1s1b0n0: x1000c0s6b0n0: [INFO|modeling_utils.py:3329] 2023-07-16 11:23:59,764 >> All model checkpoint weights were used when initializing GPT2LMHeadModel. x1000c1s1b0n0: [INFO|modeling_utils.py:3337] 2023-07-16 11:23:59,764 >> All the weights of GPT2LMHeadModel were initialized from the model checkpoint at gpt2. x1000c0s6b0n0: x1000c1s1b0n0: If your task is similar to the task the model of the checkpoint was trained on, you can already use GPT2LMHeadModel for predictions without further training. x1000c0s6b0n0: [INFO|modeling_utils.py:3337] 2023-07-16 11:23:59,764 >> All the weights of GPT2LMHeadModel were initialized from the model checkpoint at gpt2. x1000c0s6b0n0: If your task is similar to the task the model of the checkpoint was trained on, you can already use GPT2LMHeadModel for predictions without further training. 
x1000c0s6b0n0: [INFO|configuration_utils.py:561] 2023-07-16 11:24:00,009 >> loading configuration file generation_config.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/generation_config.json x1000c0s6b0n0: [INFO|configuration_utils.py:599] 2023-07-16 11:24:00,009 >> Generate config GenerationConfig { x1000c0s6b0n0: "_from_model_config": true, x1000c0s6b0n0: "bos_token_id": 50256, x1000c0s6b0n0: "eos_token_id": 50256, x1000c0s6b0n0: "transformers_version": "4.31.0.dev0" x1000c0s6b0n0: } x1000c0s6b0n0: x1000c1s1b0n0: [INFO|configuration_utils.py:561] 2023-07-16 11:24:00,009 >> loading configuration file generation_config.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/generation_config.json x1000c1s1b0n0: [INFO|configuration_utils.py:599] 2023-07-16 11:24:00,010 >> Generate config GenerationConfig { x1000c1s1b0n0: "_from_model_config": true, x1000c1s1b0n0: "bos_token_id": 50256, x1000c1s1b0n0: "eos_token_id": 50256, x1000c1s1b0n0: "transformers_version": "4.31.0.dev0" x1000c1s1b0n0: } x1000c1s1b0n0: x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-44ec8a7bce9ef049.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-721cda7e77511ffc.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-44ec8a7bce9ef049.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-721cda7e77511ffc.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-49f826bee8b8a100.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-49f826bee8b8a100.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-c87119ecc69384c8.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at 
/home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-c87119ecc69384c8.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-44ec8a7bce9ef049.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-44ec8a7bce9ef049.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-721cda7e77511ffc.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-49f826bee8b8a100.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-721cda7e77511ffc.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-49f826bee8b8a100.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-acd38b65189dc44f.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-acd38b65189dc44f.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-c1cf316f13c4acf5.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-c1cf316f13c4acf5.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-c87119ecc69384c8.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed 
dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-c87119ecc69384c8.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-acd38b65189dc44f.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-acd38b65189dc44f.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-c1cf316f13c4acf5.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-c1cf316f13c4acf5.arrow x1000c0s6b0n0: [2023-07-16 11:24:01,734] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.9.5, git-hash=unknown, git-branch=unknown x1000c0s6b0n0: [2023-07-16 11:24:02,277] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False x1000c0s6b0n0: [WARNING] cpu_adam cuda is missing or is incompatible with installed torch, only cpu ops can be compiled! x1000c0s6b0n0: Using /home/users/industry/dso/lannliat/.cache/torch_extensions/py310_cu116 as PyTorch extensions root... x1000c1s1b0n0: [WARNING] cpu_adam cuda is missing or is incompatible with installed torch, only cpu ops can be compiled! x1000c1s1b0n0: Using /home/users/industry/dso/lannliat/.cache/torch_extensions/py310_cu116 as PyTorch extensions root... x1000c1s1b0n0: [WARNING] cpu_adam cuda is missing or is incompatible with installed torch, only cpu ops can be compiled! x1000c1s1b0n0: Using /home/users/industry/dso/lannliat/.cache/torch_extensions/py310_cu116 as PyTorch extensions root... 
x1000c1s1b0n0: Traceback (most recent call last):
x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/transformers/examples/pytorch/language-modeling/run_clm.py", line 634, in <module>
x1000c1s1b0n0: main()
x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/transformers/examples/pytorch/language-modeling/run_clm.py", line 582, in main
x1000c1s1b0n0: train_result = trainer.train(resume_from_checkpoint=checkpoint)
x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/transformers/src/transformers/trainer.py", line 1539, in train
x1000c1s1b0n0: return inner_training_loop(
x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/transformers/src/transformers/trainer.py", line 1659, in _inner_training_loop
x1000c1s1b0n0: model, self.optimizer, self.lr_scheduler = self.accelerator.prepare(
x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/accelerate/accelerator.py", line 1198, in prepare
x1000c1s1b0n0: result = self._prepare_deepspeed(*args)
x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/accelerate/accelerator.py", line 1537, in _prepare_deepspeed
x1000c1s1b0n0: engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs)
x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/__init__.py", line 165, in initialize
x1000c1s1b0n0: engine = DeepSpeedEngine(args=args,
x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 309, in __init__
x1000c1s1b0n0: self._configure_optimizer(optimizer, model_parameters)
x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1173, in _configure_optimizer
x1000c1s1b0n0: basic_optimizer = self._configure_basic_optimizer(model_parameters)
x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1229, in _configure_basic_optimizer
x1000c1s1b0n0: optimizer = DeepSpeedCPUAdam(model_parameters,
x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/adam/cpu_adam.py", line 94, in __init__
x1000c1s1b0n0: self.ds_opt_adam = CPUAdamBuilder().load()
x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 454, in load
x1000c1s1b0n0: return self.jit_load(verbose)
x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 497, in jit_load
x1000c1s1b0n0: op_module = load(name=self.name,
x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1284, in load
x1000c1s1b0n0: return _jit_compile(
x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1508, in _jit_compile
x1000c1s1b0n0: _write_ninja_file_and_build_library(
x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1592, in _write_ninja_file_and_build_library
x1000c1s1b0n0: verify_ninja_availability()
x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1648, in verify_ninja_availability
x1000c1s1b0n0: raise RuntimeError("Ninja is required to load C++ extensions")
x1000c1s1b0n0: RuntimeError: Ninja is required to load C++ extensions
x1000c0s6b0n0: Traceback (most recent call last):
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/transformers/examples/pytorch/language-modeling/run_clm.py", line 634, in <module>
x1000c0s6b0n0: main()
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/transformers/examples/pytorch/language-modeling/run_clm.py", line 582, in main
x1000c0s6b0n0: train_result = trainer.train(resume_from_checkpoint=checkpoint)
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/transformers/src/transformers/trainer.py", line 1539, in train
x1000c0s6b0n0: return inner_training_loop(
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/transformers/src/transformers/trainer.py", line 1659, in _inner_training_loop
x1000c0s6b0n0: model, self.optimizer, self.lr_scheduler = self.accelerator.prepare(
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/accelerate/accelerator.py", line 1198, in prepare
x1000c0s6b0n0: result = self._prepare_deepspeed(*args)
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/accelerate/accelerator.py", line 1537, in _prepare_deepspeed
x1000c0s6b0n0: engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs)
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/__init__.py", line 165, in initialize
x1000c0s6b0n0: engine = DeepSpeedEngine(args=args,
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 309, in __init__
x1000c0s6b0n0: self._configure_optimizer(optimizer, model_parameters)
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1173, in _configure_optimizer
x1000c0s6b0n0: basic_optimizer = self._configure_basic_optimizer(model_parameters)
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1229, in _configure_basic_optimizer
x1000c0s6b0n0: optimizer = DeepSpeedCPUAdam(model_parameters,
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/adam/cpu_adam.py", line 94, in __init__
x1000c0s6b0n0: self.ds_opt_adam = CPUAdamBuilder().load()
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 454, in load
x1000c0s6b0n0: return self.jit_load(verbose)
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 497, in jit_load
x1000c0s6b0n0: op_module = load(name=self.name,
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1284, in load
x1000c0s6b0n0: return _jit_compile(
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1508, in _jit_compile
x1000c0s6b0n0: _write_ninja_file_and_build_library(
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1592, in _write_ninja_file_and_build_library
x1000c0s6b0n0: verify_ninja_availability()
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1648, in verify_ninja_availability
x1000c0s6b0n0: raise RuntimeError("Ninja is required to load C++ extensions")
x1000c0s6b0n0: RuntimeError: Ninja is required to load C++ extensions
x1000c1s1b0n0: Loading extension module cpu_adam...
x1000c1s1b0n0: Time to load cpu_adam op: 0.502467155456543 seconds
x1000c1s1b0n0: Exception ignored in: <function DeepSpeedCPUAdam.__del__ at 0x7f42085dca60>
x1000c1s1b0n0: Traceback (most recent call last):
x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/adam/cpu_adam.py", line 102, in __del__
x1000c1s1b0n0: self.ds_opt_adam.destroy_adam(self.opt_id)
x1000c1s1b0n0: AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam'
x1000c0s6b0n0: [WARNING] cpu_adam cuda is missing or is incompatible with installed torch, only cpu ops can be compiled!
x1000c0s6b0n0: Using /home/users/industry/dso/lannliat/.cache/torch_extensions/py310_cu116 as PyTorch extensions root...
x1000c0s6b0n0: Traceback (most recent call last):
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/transformers/examples/pytorch/language-modeling/run_clm.py", line 634, in <module>
x1000c0s6b0n0: main()
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/transformers/examples/pytorch/language-modeling/run_clm.py", line 582, in main
x1000c0s6b0n0: train_result = trainer.train(resume_from_checkpoint=checkpoint)
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/transformers/src/transformers/trainer.py", line 1539, in train
x1000c0s6b0n0: return inner_training_loop(
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/transformers/src/transformers/trainer.py", line 1659, in _inner_training_loop
x1000c0s6b0n0: model, self.optimizer, self.lr_scheduler = self.accelerator.prepare(
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/accelerate/accelerator.py", line 1198, in prepare
x1000c0s6b0n0: result = self._prepare_deepspeed(*args)
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/accelerate/accelerator.py", line 1537, in _prepare_deepspeed
x1000c0s6b0n0: engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs)
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/__init__.py", line 165, in initialize
x1000c0s6b0n0: engine = DeepSpeedEngine(args=args,
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 309, in __init__
x1000c0s6b0n0: self._configure_optimizer(optimizer, model_parameters)
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1173, in _configure_optimizer
x1000c0s6b0n0: basic_optimizer = self._configure_basic_optimizer(model_parameters)
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1229, in _configure_basic_optimizer
x1000c0s6b0n0: optimizer = DeepSpeedCPUAdam(model_parameters,
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/adam/cpu_adam.py", line 94, in __init__
x1000c0s6b0n0: self.ds_opt_adam = CPUAdamBuilder().load()
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 454, in load
x1000c0s6b0n0: return self.jit_load(verbose)
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 497, in jit_load
x1000c0s6b0n0: op_module = load(name=self.name,
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1284, in load
x1000c0s6b0n0: return _jit_compile(
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1508, in _jit_compile
x1000c0s6b0n0: _write_ninja_file_and_build_library(
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1592, in _write_ninja_file_and_build_library
x1000c0s6b0n0: verify_ninja_availability()
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1648, in verify_ninja_availability
x1000c0s6b0n0: raise RuntimeError("Ninja is required to load C++ extensions")
x1000c0s6b0n0: RuntimeError: Ninja is required to load C++ extensions
x1000c0s6b0n0: Exception ignored in: <function DeepSpeedCPUAdam.__del__ at 0x7eff0daaca60>
x1000c0s6b0n0: Traceback (most recent call last):
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/adam/cpu_adam.py", line 102, in __del__
x1000c0s6b0n0: self.ds_opt_adam.destroy_adam(self.opt_id)
x1000c0s6b0n0: AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam'
x1000c0s6b0n0: Exception ignored in: <function DeepSpeedCPUAdam.__del__ at 0x7f0145cb8a60>
x1000c0s6b0n0: Traceback (most recent call last):
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/adam/cpu_adam.py", line 102, in __del__
x1000c0s6b0n0: self.ds_opt_adam.destroy_adam(self.opt_id)
x1000c0s6b0n0: AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam'
x1000c1s1b0n0: [2023-07-16 11:24:04,470] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 2431494
x1000c0s6b0n0: [2023-07-16 11:24:04,784] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 665498
x1000c0s6b0n0: [2023-07-16 11:24:04,804] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 665499
```

When I tried single-node training, training proceeded smoothly:

```
deepspeed --num_gpus=2 $ROOT_DIR/transformers/examples/pytorch/language-modeling/run_clm.py \
    --model_name_or_path gpt2 \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 8 \
    --do_train \
    --do_eval \
    --output_dir ~/scratch/finetuned_models/debug \
    --deepspeed "$ROOT_DIR/FastChat/fastchat/ds_config/ds_config_zero3.json"
```

### Expected behavior

Expected training to proceed smoothly, as in the single-node case.
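The error is raised by `torch.utils.cpp_extension.verify_ninja_availability()` while DeepSpeed JIT-compiles its `cpu_adam` op, so a quick pre-flight check (a minimal sketch, assuming only that `torch` is installed in the same environment) is to run the same helper on every node before launching:

```python
# Pre-flight check to run on each node: DeepSpeedCPUAdam is JIT-compiled at
# startup, and that build requires the Ninja build tool on the local node.
import shutil

from torch.utils.cpp_extension import verify_ninja_availability

print("ninja on PATH:", shutil.which("ninja"))  # None means Ninja is not installed
verify_ninja_availability()  # raises RuntimeError("Ninja is required ...") if missing
print("Ninja found; the cpu_adam extension can be built on this node.")
```

If the check fails on a worker node, installing Ninja into that node's environment (for example with `pip install ninja`) should let the JIT build proceed.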
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24844/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24844/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24843
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24843/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24843/comments
https://api.github.com/repos/huggingface/transformers/issues/24843/events
https://github.com/huggingface/transformers/issues/24843
1,806,328,360
I_kwDOCUB6oc5rqmIo
24,843
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
{ "login": "anujsahani01", "id": 83875986, "node_id": "MDQ6VXNlcjgzODc1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/83875986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anujsahani01", "html_url": "https://github.com/anujsahani01", "followers_url": "https://api.github.com/users/anujsahani01/followers", "following_url": "https://api.github.com/users/anujsahani01/following{/other_user}", "gists_url": "https://api.github.com/users/anujsahani01/gists{/gist_id}", "starred_url": "https://api.github.com/users/anujsahani01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anujsahani01/subscriptions", "organizations_url": "https://api.github.com/users/anujsahani01/orgs", "repos_url": "https://api.github.com/users/anujsahani01/repos", "events_url": "https://api.github.com/users/anujsahani01/events{/privacy}", "received_events_url": "https://api.github.com/users/anujsahani01/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I was able to resolve the issue by following changes\r\n\r\nFirstly i added the special tokens that were used in my prompt card.\r\n```\r\nspecial_token_dict = tokenizer.special_tokens_map\r\ntokenizer.add_special_tokens(special_token_dict)\r\n```\r\n\r\nThen changed this line of code \r\n``` \r\nmodel.resize_token_embeddings(num_additional_token + tokenizer.vocab_size)\r\n```\r\n\r\nto this\r\n```\r\nmodel.resize_token_embeddings(len(tokenizer))\r\n```" ]
1,689
1,689
1,689
NONE
null
Quantizing the HuggingFaceH4/starchat-alpha model using the transformers `BitsAndBytesConfig` and a LoRA config to reduce memory usage.

```
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16)

model = AutoModelForCausalLM.from_pretrained(model_id,
    config=config,
    quantization_config=nf4_config,
    torch_dtype=torch.bfloat16,)

model.resize_token_embeddings(num_additional_token + tokenizer.vocab_size)
```

I tried to reduce the model_max_length:

```
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/starchat-alpha",
    additional_special_tokens=["[SYSTEM]", "[ASSISTANT]", "[USER]", "[END]"],
    pad_token="[PAD]",
    model_max_length=1000,
    return_token_type_ids=False)
```

I used the following LoRA configuration:

```
from peft import LoraConfig, get_peft_model

model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["c_attn"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM"
)

model = get_peft_model(model, lora_config)
model.config.use_cache = False
```

Using the LoRA configuration, I was able to reduce the trainable params to 4,014,080.

Finally, I encountered the error while training the model using the HuggingFace `TrainingArguments` and `Trainer`:

```
training_args = TrainingArguments(
    output_dir='./NeuralCodeBot_starchat',  # output directory
    num_train_epochs=2,                     # total number of training epochs
    per_device_train_batch_size=1,          # batch size per device during training
    per_device_eval_batch_size=1,           # batch size for evaluation
    warmup_steps=50,                        # number of warmup steps for learning rate scheduler
    weight_decay=0.01,                      # strength of weight decay
    logging_dir='./logs',                   # directory for storing logs
    logging_steps=100,
    learning_rate=1e-3,
    max_steps=10000,
    fp16=True,
    push_to_hub=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["test"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)

trainer.train()
```

## Error Message:

```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
[<ipython-input-29-3435b262f1ae>](https://localhost:8080/#) in <cell line: 1>()
----> 1 trainer.train()

30 frames
[/usr/local/lib/python3.10/dist-packages/bitsandbytes/autograd/_functions.py](https://localhost:8080/#) in forward(ctx, A, B, out, bias, state)
    514         # 1. Dequantize
    515         # 2. MatmulnN
--> 516         output = torch.nn.functional.linear(A, F.dequantize_4bit(B, state).to(A.dtype).t(), bias)
    517
    518         # 3. Save state

RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
```

When running the above code, I encountered this error during training. Unfortunately, I'm unable to provide a fuller error trace, as I couldn't run the code on CPU due to BitsAndBytesConfig requiring GPUs. However, I would like to request assistance from the community in resolving this issue. If anyone has encountered a similar error while quantizing HuggingFace language models, specifically using BitsAndBytesConfig and Trainer, I would greatly appreciate any suggestions or guidance on how to overcome it. Any suggestions will be highly appreciated. Thank you in advance.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24843/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24843/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24842
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24842/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24842/comments
https://api.github.com/repos/huggingface/transformers/issues/24842/events
https://github.com/huggingface/transformers/issues/24842
1,806,317,302
I_kwDOCUB6oc5rqjb2
24,842
Request support for RWKV-4-World model.
{ "login": "SetoKaiba", "id": 159540, "node_id": "MDQ6VXNlcjE1OTU0MA==", "avatar_url": "https://avatars.githubusercontent.com/u/159540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SetoKaiba", "html_url": "https://github.com/SetoKaiba", "followers_url": "https://api.github.com/users/SetoKaiba/followers", "following_url": "https://api.github.com/users/SetoKaiba/following{/other_user}", "gists_url": "https://api.github.com/users/SetoKaiba/gists{/gist_id}", "starred_url": "https://api.github.com/users/SetoKaiba/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SetoKaiba/subscriptions", "organizations_url": "https://api.github.com/users/SetoKaiba/orgs", "repos_url": "https://api.github.com/users/SetoKaiba/repos", "events_url": "https://api.github.com/users/SetoKaiba/events{/privacy}", "received_events_url": "https://api.github.com/users/SetoKaiba/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "@sgugger Is it possible to add the support for it? I'd like to use PEFT to fine tune the model.", "We have implemented the function of fine-tuning the RWKV-World model using the peft library, and we will immediately publish it in HF", "The relevant code will be provided under this issue", "Please refer to๏ผš\r\nhttps://github.com/StarRing2022/HF-For-RWKVWorld-LoraAlpaca\r\nhttps://huggingface.co/StarRing2022/RWKV-4-World-7B", "> Please refer to๏ผš https://github.com/StarRing2022/HF-For-RWKVWorld-LoraAlpaca https://huggingface.co/StarRing2022/RWKV-4-World-7B\r\n\r\n@StarRing2022 Thank you for your great work.\r\nWill you make a PR for it? Or will you just keep it a separate repo?\r\nI'd like to see it PR back to hf transformers. And others can get your work easily.\r\n\r\nhttps://github.com/StarRing2022/RingRWKV\r\nI think this should be mentioned in the example github repo as well.", "cc @younesbelkada ", "Thanks๏ผŒthere are indeed some issues with the RWKV in HF format on the official transformers. Recently, we have found that there are issues such as CFG and sample_ Logits, PR is necessary, and we also hope that the official can provide improvements", "I think that hf transformers is working with RWKV world ,just use the ringrwkv tokenizer(or the world tokenizer in the official repo) and it will work", "Yes,We are trying to contact with HF transformers" ]
1,689
1,693
null
NONE
null
### Model description

As RWKV-4-World uses a different tokenizer and vocabulary, the current RWKV support in transformers is incompatible with it.

### Open source status

- [X] The model implementation is available
- [X] The model weights are available

### Provide useful links for the implementation

https://huggingface.co/StarRing2022/RWKV-4-World-1.5B @StarRing2022
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24842/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24842/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/24841
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24841/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24841/comments
https://api.github.com/repos/huggingface/transformers/issues/24841/events
https://github.com/huggingface/transformers/issues/24841
1,806,314,076
I_kwDOCUB6oc5rqipc
24,841
Support for caching prompt hidden states through multiple calls of `generate()`
{ "login": "offendo", "id": 29783125, "node_id": "MDQ6VXNlcjI5NzgzMTI1", "avatar_url": "https://avatars.githubusercontent.com/u/29783125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/offendo", "html_url": "https://github.com/offendo", "followers_url": "https://api.github.com/users/offendo/followers", "following_url": "https://api.github.com/users/offendo/following{/other_user}", "gists_url": "https://api.github.com/users/offendo/gists{/gist_id}", "starred_url": "https://api.github.com/users/offendo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/offendo/subscriptions", "organizations_url": "https://api.github.com/users/offendo/orgs", "repos_url": "https://api.github.com/users/offendo/repos", "events_url": "https://api.github.com/users/offendo/events{/privacy}", "received_events_url": "https://api.github.com/users/offendo/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[ { "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false } ]
[ "cc @gante ", "Hey @offendo ๐Ÿ‘‹ \r\n\r\nThis is a relevant request, and one that I can't give an exact solution at the moment. In general, `generate()` + prompting is very manual at the moment, and we really want to improve it. As such, the solution to your proposal will depend on our next iteration of prompt-handling!\r\n\r\nI'm assigning the issue to me, and I'll keep you posted ๐Ÿค— \r\n\r\n(cc @patrickvonplaten -- this is related to our brainstorming session yesterday, about prompting)", "This feature would be useful not only for long prompts, but also for incrementally building dialogs / conversations without recomputing the generated parts. And I think there's a simple solution: just expose `past_key_values` in the various `generate` functions, and allow `past_key_values` to be passed in as the parameter for all decoder-only models.", "@gante can't we already pass `past_key_values` to `generate` through `model_kwargs` ? \r\nSo we could pre-encode the prompt into `past_key_values` with a single forward pass and then re-use it in generation already I think. \r\n\r\nSee this similar (old) issue maybe: https://github.com/huggingface/transformers/issues/4368 should help", "@patrickvonplaten Yeah I guess this solves the issue for @offendo , but still `past_key_values` are not being returned with the generated sequences. Hopefully there could be a flag (possibly called `return_past_key_values`) that allows them to be returned.", "@lqf96 absolutely. I'll work on adding that, since I've seen others mentioning its usefulness :)", "Looks like this is being addressed by #25086... Hopefully it can be accepted soon!", "ไฝ ๅฅฝ๏ผŒๆˆ‘ๅทฒ็ปๆŽฅๆ”ถๅˆฐไฝ ็š„้‚ฎไปถ๏ผ", "> @gante can't we already pass `past_key_values` to `generate` through `model_kwargs` ? So we could pre-encode the prompt into `past_key_values` with a single forward pass and then re-use it in generation already I think.\r\n> \r\n> See this similar (old) issue maybe: #4368 should help\r\n\r\nDoes this approach work even with the padding issue? If we encode the common prompt (which has no padding) and then call generate on a batch of values with left padding, the hidden states wonโ€™t align properly and weโ€™d have to do some dynamic padding of the `past_key_values` or something.\r\n\r\nPlease correct me if I am misunderstanding something! \r\n\r\nAnd also, thanks to everyone for their work on this!", "@offendo If the inputs are properly passed, then the position ids can be correctly inferred, which will result in the correct output :) Possibly we may need additional logic to ensure this position id inference can happen seamlessly", "Hi @gante, thanks for working on this commit https://github.com/huggingface/transformers/pull/25086 ! May I know if this feature request is addressed by this commit or do we need to make additional changes on top of it? 
", "@louisowen6 definitely needs more work, as it breaks 178 of our tests :)", "Looking forward to this!", "Hey folks ๐Ÿ‘‹ \r\n\r\nIf you install from `main` and add `return_dict_in_generate=True` to `generate`, `past_key_values` will be part of the output, assuming your model is configured with `use_cache=True` (the default).\r\n\r\nYou can then pass `past_key_values` to `generate` to continue generating!", "Here's what I did to get it to work with hidden states generated by `model.forward`:\r\n\r\n```python\r\nimport transformers\r\n\r\nmodel_name = \"meta-llama/Llama-2-7b-chat-hf\"\r\nm = transformers.AutoModelForCausalLM.from_pretrained(model_name, device_map=\"auto\")\r\nt = transformers.AutoTokenizer.from_pretrained(model_name)\r\n\r\nt.padding_side = \"left\"\r\nt.pad_token = t.eos_token\r\ninput_tokens = t([\"Lizargrid: Yer a lizard Harry! Your parents\", \"Lobstargrid: Yer a lobster Harry! Your parents\"], return_tensors=\"pt\", padding=True).to(0)\r\n\r\n\r\nforward_outputs = m(\r\n **input_tokens,\r\n return_dict=True,\r\n)\r\n\r\n# THIS IS THE ONLY IMPORTANT LINE:\r\nforward_outputs.past_key_values = [[y[:, :, :-1] for y in x] for x in forward_outputs.past_key_values]\r\n\r\nnew_gen = m.generate(\r\n input_ids=input_tokens[\"input_ids\"],\r\n past_key_values=forward_outputs.past_key_values,\r\n bos_token_id=t.eos_token_id, \r\n pad_token_id=t.eos_token_id, \r\n max_new_tokens=100, \r\n use_cache=True,\r\n)\r\n\r\nfor text in t.batch_decode(new_gen, skip_special_tokens=False):\r\n print(\"\\n\" + \"#\" * 80 + \"\\n\\n\" + text)\r\n\r\n```", "Basically, `past_key_values` expects to not have a key and a value for the last token, as it expects `past_key_values` to come from generation, so we drop those key and value associated to the last token of our input sequences.\r\n\r\nIt would be fun if `generate` was smarter and we didn't have to do that @gante, basically just detect that if the input and the number of key and value are the same as the number of tokens, that they come from `forward` not `generate`", "@JulesGM To avoid wasting computations, you have the following options:\r\n1. run the forward pass with `input_tokens[\"input_ids\"][:, :-1]`, which is equivalent to doing the prefill step outside `generate`\r\n2. add a token selection step after your forward pass, and append that token to `input_ids`, which would be somewhat close to running one generation step manually\r\n\r\nAutomatic slicing would go against our `generate` design philosophy, where we avoid adding transformations except on the most common use cases ๐Ÿค— In our experience, adding these sorts of automatic input handling leads to problems in future features.", "@gante I agree that silent modifications are generally bad.\r\nI would suggest making it more of a first-grade citizen. The case of having few-shots examples being generated from a million times is just so common. It could be, then, another argument, if it comes from a forward call?", "Just in my lab at Mila in Montreal, multiple people have had this issue", "I really feel like it should be extremely straightforward to do. In my case, I have like 1000 tokens of few-shot examples, and ~100 tokens or so of text to generate. Being able to, in a straightforward way, not duplicate that memory & work (without having to do tricks like ditch the last token) would be a big deal.", "@JulesGM I 100% agree that it is trivial to add from a technical point of view :) \r\n\r\nThe catch is that there are 3 possible solutions, and none is good. 
In the absence of a good solution, it is by far preferable to continue handling the case outside `generate` -- if we stop operating under this principle, `generate` will quickly become bloated and it will be even harder for us to iterate on it.\r\n\r\nFor completeness, here are the solutions I've considered: (I'm open to new solutions!)\r\n1. Crop the `past_key_values` outside `generate`, as you suggested in the original snippet (issue = it is a manual task)\r\n2. Automatically crop `past_key_values` if it has the same length as `input_ids` (issue = this condition might be hit in other circumstances, such as when `past_key_values` holds content regarding a past round of a multi-turn chat conversation, rendering `generate` unusable in those settings)\r\n3. Adding a flag (issue = users have to discover the flag, further bloat on `generate`)\r\n\r\n__________________________________________________________\r\n\r\nIMO, the underlying issue at the moment is that we allow `past_key_values` and `input_ids` to hold both redundant information and mutually exclusive information, preventing us from making assumptions and/or throwing informative exceptions. ", "How about calling them:\r\n`past_key_values_from_generate`\r\n`past_key_values_from_forward`", "Otherwise I feel like just having `past_key_values` & silently checking whether it is equal in length to input_ids, or one token shorter, is honestly probably better than the alternatives" ]
1,689
1,705
1,698
NONE
null
### Feature request

Hi there, I'd like to be able to re-use the hidden states for a common (potentially long) prompt across multiple calls to `model.generate()` in order to reduce redundant computation. Here is how I envision a final API, though I'm sure there are multiple ways to do it.

```python
# Load stuff
model = AutoModel.from_pretrained('huggyllama/llama-7b')
tokenizer = AutoTokenizer.from_pretrained('huggyllama/llama-7b')

# Common prompt that we'd prepend to every example
prompt = "This is a common prompt in every example."
prompt_ids = tokenizer(prompt, return_tensors='pt')

# Examples to pass to generate
examples = ["Ackbar went to", "Billaba enjoys", "Cody is eating some"]

# Generation loop
outputs = []
prompt_hidden_state = None
for ex in examples:
    # Current way of doing things
    out = model.generate(
        **tokenizer(prompt + ex, return_tensors='pt'),
    )

    # Proposed method to re-use prompt_hidden_state
    out = model.generate(
        **tokenizer(ex, return_tensors='pt'),
        common_prompt_ids=prompt_ids,
        prompt_hidden_state=prompt_hidden_state
    )
    prompt_hidden_state = out.prompt_hidden_state
    outputs.append(out.sequences)
```

Thanks in advance.

### Motivation

A very common pattern for LLM usage is having a common prompt (e.g., instructions and input/output pairs), a sample input, and asking it to generate the sample output. For example:

```
You are a programmer's assistant which converts English descriptions to Python functions.

English: <example 1 description>
Python: <example 1 function>

English: <example 2 description>
Python: <example 2 function>

English: <example 3 description>
Python: <example 3 function>

English: <input description>
Python:
```

I'd like to be able to cache the common part of the prompt across inputs, that is, everything before `<input description>`, which appears in every example, to avoid potentially expensive re-computation.

### Your contribution

The only existing info I could find is the short discussion [here](https://discuss.huggingface.co/t/avoid-recalculating-hidden-states-between-generate-calls/34209). I tried messing around a bit to get this to work but had little luck. I'm not familiar with the inner workings of `transformers` and ran into numerous errors.

One problem is padding which, if we're using left padding, can cause some misalignment with the prompt hidden states, e.g.:

```
<p> <p> <p> common prompt x_1 x_2 x_3
<p> <p> common prompt x_1 x_2 x_3 x_4
<p> <p> <p> <p> common prompt x_1 x_2
```

I don't know the best way to solve this. Do we dynamically pad every tensor in `past_key_values`? That seems slow, but I don't know if it actually is. If someone can suggest a better/easier way, or maybe give some more pointers on how to solve padding, I'd be happy to try again myself. Thanks in advance.
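For reference, a minimal sketch of the workaround that emerged in the comments above (not an official API; it assumes a recent `transformers` where `generate()` accepts `past_key_values` and only feeds the model the tokens the cache does not already cover, and it uses batch size 1 to sidestep the padding question raised here):

```python
# Hedged sketch: precompute the shared prompt's KV cache once with forward(),
# drop the last position (as discussed in the comments), then reuse the cache
# for each example instead of re-encoding the prompt every time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "EleutherAI/gpt-neo-125m"  # illustrative model choice
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt_ids = tok("This is a common prompt in every example.", return_tensors="pt").input_ids
with torch.no_grad():
    cache = model(prompt_ids, use_cache=True).past_key_values
if hasattr(cache, "to_legacy_cache"):  # newer versions return a Cache object
    cache = cache.to_legacy_cache()
# generate() should only be handed keys/values for tokens it will NOT re-process,
# so drop the entry for the last prompt position
prompt_cache = tuple((k[:, :, :-1, :], v[:, :, :-1, :]) for k, v in cache)

for ex in [" Ackbar went to", " Billaba enjoys"]:
    full_ids = torch.cat([prompt_ids, tok(ex, return_tensors="pt").input_ids], dim=-1)
    out = model.generate(input_ids=full_ids, past_key_values=prompt_cache, max_new_tokens=20)
    print(tok.decode(out[0], skip_special_tokens=True))
```

Whether the cached tensors can be reused batched with left padding remains the open question in this thread; this sketch only covers the unbatched case.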
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24841/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24841/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24840
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24840/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24840/comments
https://api.github.com/repos/huggingface/transformers/issues/24840/events
https://github.com/huggingface/transformers/issues/24840
1,806,307,423
I_kwDOCUB6oc5rqhBf
24,840
Again: RuntimeError: unscale_() has already been called on this optimizer since the last update()
{ "login": "FartyPants", "id": 23346289, "node_id": "MDQ6VXNlcjIzMzQ2Mjg5", "avatar_url": "https://avatars.githubusercontent.com/u/23346289?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FartyPants", "html_url": "https://github.com/FartyPants", "followers_url": "https://api.github.com/users/FartyPants/followers", "following_url": "https://api.github.com/users/FartyPants/following{/other_user}", "gists_url": "https://api.github.com/users/FartyPants/gists{/gist_id}", "starred_url": "https://api.github.com/users/FartyPants/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FartyPants/subscriptions", "organizations_url": "https://api.github.com/users/FartyPants/orgs", "repos_url": "https://api.github.com/users/FartyPants/repos", "events_url": "https://api.github.com/users/FartyPants/events{/privacy}", "received_events_url": "https://api.github.com/users/FartyPants/received_events", "type": "User", "site_admin": false }
[ { "id": 5616426447, "node_id": "LA_kwDOCUB6oc8AAAABTsPdzw", "url": "https://api.github.com/repos/huggingface/transformers/labels/solved", "name": "solved", "color": "B1D6DC", "default": false, "description": "" } ]
closed
false
null
[]
[ "I have the same errors (I opened an issue). However I don't have the gradient_accumulation_steps larger than the rows divied by per_device_train_batch_size. I mean I have this parameters:\r\nGRADIENT_ACCUMULATION_STEPS = 16\r\nMICRO_BATCH_SIZE = 4\r\ndataset.num_rows = 53131\r\n\r\nCould it be that one of the batch size is less than the micro_batch_size?", "I think this is fixed on the main branch now.\r\ncc @muellerzr and @pacman100 ", "Hello @FartyPants, please see this: https://github.com/huggingface/transformers/issues/24849#issuecomment-1638272113", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,692
1,692
NONE
null
### System Info
- `transformers` version: 4.30.2
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.12
- Huggingface_hub version: 0.16.2
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

### Who can help?
@sgugger

### Information
- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction
I've seen the PR where this was supposed to be fixed. From my experiments, I think the issue still happens when `gradient_accumulation_steps` is larger than `dataset.num_rows` divided by `per_device_train_batch_size`.

These are obviously not very good input data, and it's fairly obvious that this should fail, but it could happen to people if the dataset is too small and `gradient_accumulation_steps` is set arbitrarily; no relevant info is given, just a RuntimeError. So it's partly a bug and partly user error. On my end, the issue can be fixed by lowering `gradient_accumulation_steps` so that it satisfies the above requirement. I have not looked at the transformers code where this comes into play; this is based purely on my own hunch: if I forgot to safeguard the data, that is where I would make an error.

The debug log:

```
File "\env\lib\site-packages\transformers\trainer.py", line 1645, in train
    return inner_training_loop(
File "\env\lib\site-packages\transformers\trainer.py", line 1987, in _inner_training_loop
    self.accelerator.clip_grad_norm_(
File "\env\lib\site-packages\accelerate\accelerator.py", line 1893, in clip_grad_norm_
    self.unscale_gradients()
File "\env\lib\site-packages\accelerate\accelerator.py", line 1856, in unscale_gradients
    self.scaler.unscale_(opt)
File "env\lib\site-packages\torch\cuda\amp\grad_scaler.py", line 275, in unscale_
    raise RuntimeError("unscale_() has already been called on this optimizer since the last update().")
RuntimeError: unscale_() has already been called on this optimizer since the last update().
```

### Expected behavior
Safeguard this if it is indeed an issue, or give an error that tells you why this happens (`gradient_accumulation_steps` too high for the amount of data). If this is the wrong place to bring this up, let me know.
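For illustration, the safeguard suggested above might look like this (a sketch only; the function and argument names are hypothetical, not Trainer API):

```python
# Hypothetical guard: fail fast when a full gradient-accumulation window
# never fits into one epoch, instead of surfacing an opaque RuntimeError.
def check_grad_accumulation(num_rows: int, per_device_train_batch_size: int,
                            gradient_accumulation_steps: int) -> None:
    steps_per_epoch = num_rows // per_device_train_batch_size
    if gradient_accumulation_steps > steps_per_epoch:
        raise ValueError(
            f"gradient_accumulation_steps={gradient_accumulation_steps} exceeds the "
            f"{steps_per_epoch} batches available per epoch; lower it, reduce "
            "per_device_train_batch_size, or use a larger dataset."
        )

check_grad_accumulation(num_rows=32, per_device_train_batch_size=4,
                        gradient_accumulation_steps=16)  # raises: 16 > 8
```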
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24840/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24840/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24839
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24839/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24839/comments
https://api.github.com/repos/huggingface/transformers/issues/24839/events
https://github.com/huggingface/transformers/issues/24839
1,806,237,273
I_kwDOCUB6oc5rqP5Z
24,839
eval_loss of the same set of data differs when using different batch size
{ "login": "namespace-Pt", "id": 61188463, "node_id": "MDQ6VXNlcjYxMTg4NDYz", "avatar_url": "https://avatars.githubusercontent.com/u/61188463?v=4", "gravatar_id": "", "url": "https://api.github.com/users/namespace-Pt", "html_url": "https://github.com/namespace-Pt", "followers_url": "https://api.github.com/users/namespace-Pt/followers", "following_url": "https://api.github.com/users/namespace-Pt/following{/other_user}", "gists_url": "https://api.github.com/users/namespace-Pt/gists{/gist_id}", "starred_url": "https://api.github.com/users/namespace-Pt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/namespace-Pt/subscriptions", "organizations_url": "https://api.github.com/users/namespace-Pt/orgs", "repos_url": "https://api.github.com/users/namespace-Pt/repos", "events_url": "https://api.github.com/users/namespace-Pt/events{/privacy}", "received_events_url": "https://api.github.com/users/namespace-Pt/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "Hi @namespace-Pt \r\n\r\nThank you for reporting.\r\n\r\n This is because this is a causal LM models, where the loss is computed across the non-padding tokens.\r\n\r\nThe loss (returned from the model's `forward`) is the total loss divided by the number of non-padding tokens sent to the model.\r\n\r\nIn your case (4 examples), they have 438, 461, 423 and 183 non-padding tokens, a total of `1505`.\r\n\r\nFor each single example, the (averaged) loss is `2.5674`, `2.7242`, `2.9536` and `2.3945`. Multiplying by the corresponding number of non-padding tokens, we get `1124.5172`, `1255.8704`, `1249.3870` and `438.1949`. Summing them gives the total loss of `4067.9697`. \r\n\r\n\r\nDivided by `1505` (the total number of non-padding tokens in the batch), we get `4067.9697 / 1505 = 2.7031`, which is the loss we get when sending the batch to the model. (There is a slight precision issue above, but it's fine)\r\n\r\nThis is known and not an real issue. However, if you want to have full control, you can call model's forward without `labels` and compute it in your own code.\r\n", "There is a more detailed discussion\r\n\r\nhttps://github.com/huggingface/transformers/issues/24725", "Got it. Thank you. So it should not be macro-average." ]
1,689
1,690
1,690
CONTRIBUTOR
null
### System Info
- `transformers` version: 4.30.0
- Platform: Linux-5.4.0-147-generic-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no

### Who can help?
@ArthurZucker @younesbelkada

### Information
- [ ] The official example scripts
- [X] My own modified scripts

### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)

### Reproduction
`eval_loss` of **the same set of data** from **the same model** (`gpt-neo`, `flan-t5`, `llama`...) **differs** when using different batch sizes.

```python
import torch
from transformers import AutoModel, AutoTokenizer, AutoModelForCausalLM, AutoModelForSeq2SeqLM

# padding_side is a tokenizer argument, so it belongs here rather than on the model
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")

# bug also happens on flan-t5
# tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
# model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")

# set pad token
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# For the following inputs, eval_loss is different when using different batch sizes
samples = [
    "Sheldon: So if a photon is directed through a plane with two slits in it and either slit is observed it will not go through both slits. If it's unobserved it will, however, if it's observed after it's left the plane but before it hits its target, it will not have gone through both slits. Leonard: Agreed, what's your point? Sheldon: There's no point, I just think it's a good idea for a tee-shirt. Leonard: Excuse me? Receptionist: Hang on. Leonard: One across is Aegean, eight down is Nabakov, twenty-six across is MCM, fourteen down is… move your finger… phylum, which makes fourteen across Port-au-Prince. See, Papa Doc's capital idea, that's Port-au-Prince. Haiti. Receptionist: Can I help you? Leonard: Yes. Um, is this the High IQ sperm bank? Receptionist: If you have to ask, maybe you shouldn't be here. Sheldon: I think this is the place. Receptionist: Fill these out. Leonard: Thank-you. We'll be right back. Receptionist: Oh, take your time. I'll just finish my crossword puzzle. Oh wait. (They sit and begin to fill in forms). Sheldon: Leonard, I don't think I can do this. Leonard: What, are you kidding? You're a semi-pro. Sheldon: No. We are committing genetic fraud. There's no guarantee that our sperm is going to generate high IQ offspring, think about that. Sheldon: So if a photon is directed through a plane with two slits in it and either slit is observed it will not go through both slits. If it's unobserved it will, however, if it's observed after it's left the plane but before it hits its target, it will not have gone through both slits. Leonard: Agreed, what's your point? Sheldon: There's no point, I just think it's a good idea for a tee-shirt. Leonard: Excuse me?",
    "Sheldon: Are you still mad about the sperm bank? Leonard: No. Sheldon: You want to hear an interesting thing about stairs? Leonard: Not really. Sheldon: If the height of a single step is off by as little as two millimetres, most people will trip. Leonard: I don't care. Two millimetres? 
That doesn't seem right. Sheldon: No, it's true, I did a series of experiments when I was twelve, my father broke his clavicle. Leonard: Is that why they sent you to boarding school? Sheldon: No, that was the result of my work with lasers. Leonard: New neighbour? Sheldon: Evidently. Leonard: Significant improvement over the old neighbour. Sheldon: Two hundred pound transvestite with a skin condition, yes she is. Penny: Oh, hi! Leonard: Hi. Sheldon: Hi. Leonard: Hi. Sheldon: Hi. Penny: Hi? Leonard: We don't mean to interrupt, we live across the hall. Penny: Oh, that's nice. Leonard: Ohโ€ฆ uhโ€ฆ noโ€ฆ we don't live togetherโ€ฆ umโ€ฆ we live together but in separate, heterosexual bedrooms. Penny: Oh, okay, well, guess I'm your new neighbour, Penny. Leonard: Leonard, Sheldon. Penny: Hi. Leonard: Hi. Sheldon: Hi. Penny: Hi. Leonard: Hi. Well, uh, oh, welcome to the building. Penny: Thankyou, maybe we can have coffee sometime. Leonard: Oh, great. Penny: Great. Sheldon: Great. Leonard: Great. Well, bye. Penny: Bye. Sheldon: Bye. Leonard: Bye. Leonard: Should we have invited her for lunch? Sheldon: No. We're going to start Season Two of Battlestar Galactica. Leonard: We already watched the Season Two DVDs. Sheldon: Not with commentary. Leonard: I think we should be good neighbours, invite her over, make her feel welcome. Sheldon: We never invited Louis-slash-Louise over. Leonard: Well, then that was wrong of us. Sheldon: Are you still mad about the sperm bank? Leonard: No. Sheldon: You want to hear an interesting thing about stairs? Leonard: Not really.", "Leonard: Okay, well, make yourself at home. Penny: Okay, thankyou. Leonard: You're very welcome. Penny: This looks like some serious stuff, Leonard, did you do this? Sheldon: Actually that's my work. Penny: Wow. Sheldon: Yeah, well, it's just some quantum mechanics, with a little string theory doodling around the edges. That part there, that's just a joke, it's a spoof of the Bourne-Oppenheimer approximation. Penny: So you're like, one of those, beautiful mind genius guys. Sheldon: Yeah. Penny: This is really impressive. Leonard: I have a board. If you like boards, this is my board. Penny: Holy smokes. Sheldon: If by holy smokes you mean a derivative restatement of the kind of stuff you can find scribbled on the wall of any men's room at MIT, sure. Leonard: What? Sheldon: Oh, come on. Who hasn't seen this differential below โ€œhere I sit broken hearted?โ€ Leonard: At least I didn't have to invent twenty-six dimensions just to make the math come out. Sheldon: I didn't invent them, they're there. Leonard: In what universe? Sheldon: In all of them, that is the point. Penny: Uh, do you guys mind if I start? Sheldon: Um, Penny, that's where I sit. Penny: So, sit next to me. Sheldon: No, I sit there. Penny: What's the difference? Sheldon: What's the difference? Leonard: Here we go. Sheldon: In the winter that seat is close enough to the radiator to remain warm, and yet not so close as to cause perspiration. In the summer it's directly in the path of a cross breeze created by open windows there, and there. Leonard: Okay, well, make yourself at home. Penny: Okay, thankyou. Leonard: You're very welcome. Penny: This looks like some serious stuff, Leonard, did you do this?", "Leonard: Uh, there it goes, it sticks, I'm sorry. Penny: Okay. Thanks. Leonard: You're welcome, oh, you're going to step right, okay, I'llโ€ฆ. Penny: Hey, Leonard? Leonard: The hair products are Sheldon's. Penny: Um, okay. Can I ask you a favour. Leonard: A favour? 
Sure, you can ask me a favour, I would do you a favour for you. Penny: It's okay if you say no. Leonard: Oh, I'll probably say yes. Penny: It's just not the kind of thing you ask a guy you've just met. Leonard: Wow. Leonard: Uh, there it goes, it sticks, I'm sorry. Penny: Okay. Thanks. Leonard: You're welcome, oh, you're going to step right, okay, I'll…. Penny: Hey, Leonard?"
]

model.eval()
with torch.no_grad():
    # feed all data in one batch
    all_batch_samples = tokenizer(samples, return_tensors="pt", padding="max_length", max_length=480, truncation=True)
    labels = all_batch_samples["input_ids"].clone()
    labels[labels == tokenizer.pad_token_id] = -100
    all_batch_samples["labels"] = labels
    outputs = model(**all_batch_samples)
    all_loss = outputs.loss

    # feed one data sample per batch (batch size is 1)
    losses = []
    for i in range(len(all_batch_samples["input_ids"])):
        batch_samples = tokenizer(samples[i], return_tensors="pt", padding="max_length", max_length=480, truncation=True)
        labels = batch_samples["input_ids"].clone()
        labels[labels == tokenizer.pad_token_id] = -100
        batch_samples["labels"] = labels
        for k, v in batch_samples.items():
            # always true
            assert (all_batch_samples[k][i] == batch_samples[k]).all()
        losses.append(model(**batch_samples).loss)
    losses = torch.stack(losses)

print(f"BS=1: {losses.mean()}", "*"*5, f"BS=all: {all_loss}", "*"*5, f"Losses: {losses}")
# BS=1: 3.6513803005218506 ***** BS=all: 3.6280925273895264 ***** Losses: tensor([3.5703, 3.4178, 3.8621, 3.7554])
```

### Expected behavior
I think the loss should be exactly the same with different batch sizes. I wonder why the deviation happens.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24839/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24839/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24838
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24838/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24838/comments
https://api.github.com/repos/huggingface/transformers/issues/24838/events
https://github.com/huggingface/transformers/pull/24838
1,805,972,382
PR_kwDOCUB6oc5VlBBv
24,838
Ko perf infer gpu many
{ "login": "heuristicwave", "id": 31366038, "node_id": "MDQ6VXNlcjMxMzY2MDM4", "avatar_url": "https://avatars.githubusercontent.com/u/31366038?v=4", "gravatar_id": "", "url": "https://api.github.com/users/heuristicwave", "html_url": "https://github.com/heuristicwave", "followers_url": "https://api.github.com/users/heuristicwave/followers", "following_url": "https://api.github.com/users/heuristicwave/following{/other_user}", "gists_url": "https://api.github.com/users/heuristicwave/gists{/gist_id}", "starred_url": "https://api.github.com/users/heuristicwave/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/heuristicwave/subscriptions", "organizations_url": "https://api.github.com/users/heuristicwave/orgs", "repos_url": "https://api.github.com/users/heuristicwave/repos", "events_url": "https://api.github.com/users/heuristicwave/events{/privacy}", "received_events_url": "https://api.github.com/users/heuristicwave/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'll create a PR again using the Korean translation team's PR template." ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24838/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24838/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24838", "html_url": "https://github.com/huggingface/transformers/pull/24838", "diff_url": "https://github.com/huggingface/transformers/pull/24838.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24838.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24837
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24837/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24837/comments
https://api.github.com/repos/huggingface/transformers/issues/24837/events
https://github.com/huggingface/transformers/pull/24837
1,805,923,768
PR_kwDOCUB6oc5Vk3um
24,837
Remove deprecated code
{ "login": "statelesshz", "id": 28150734, "node_id": "MDQ6VXNlcjI4MTUwNzM0", "avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/statelesshz", "html_url": "https://github.com/statelesshz", "followers_url": "https://api.github.com/users/statelesshz/followers", "following_url": "https://api.github.com/users/statelesshz/following{/other_user}", "gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}", "starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions", "organizations_url": "https://api.github.com/users/statelesshz/orgs", "repos_url": "https://api.github.com/users/statelesshz/repos", "events_url": "https://api.github.com/users/statelesshz/events{/privacy}", "received_events_url": "https://api.github.com/users/statelesshz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It seems the failing test case is not related to these commits. Please re-trigger this failed workflow, Thanks.", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,694
1,689
CONTRIBUTOR
null
# What does this PR do? This PR removes some deprecated code: - remove the `xpu_backend` training argument, since it is deprecated and will be removed in version 4.31 of Transformers. - remove some code that will never be executed. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24837/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24837/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24837", "html_url": "https://github.com/huggingface/transformers/pull/24837", "diff_url": "https://github.com/huggingface/transformers/pull/24837.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24837.patch", "merged_at": 1689619560000 }
https://api.github.com/repos/huggingface/transformers/issues/24836
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24836/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24836/comments
https://api.github.com/repos/huggingface/transformers/issues/24836/events
https://github.com/huggingface/transformers/issues/24836
1,805,875,019
I_kwDOCUB6oc5ro3dL
24,836
Pipeline feature request for min_new_tokens
{ "login": "mediocreatmybest", "id": 80406625, "node_id": "MDQ6VXNlcjgwNDA2NjI1", "avatar_url": "https://avatars.githubusercontent.com/u/80406625?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mediocreatmybest", "html_url": "https://github.com/mediocreatmybest", "followers_url": "https://api.github.com/users/mediocreatmybest/followers", "following_url": "https://api.github.com/users/mediocreatmybest/following{/other_user}", "gists_url": "https://api.github.com/users/mediocreatmybest/gists{/gist_id}", "starred_url": "https://api.github.com/users/mediocreatmybest/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mediocreatmybest/subscriptions", "organizations_url": "https://api.github.com/users/mediocreatmybest/orgs", "repos_url": "https://api.github.com/users/mediocreatmybest/repos", "events_url": "https://api.github.com/users/mediocreatmybest/events{/privacy}", "received_events_url": "https://api.github.com/users/mediocreatmybest/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[ { "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false } ]
[ "cc @gante ", "Hey @mediocreatmybest -- the flag is already available and operational :) Be mindful that you may need to adjust `max_new_tokens` to an admissible range. An impossible combination of constraints should raise a warning -- we are working on them :)\r\n\r\n```py\r\nfrom transformers import pipeline\r\npipe = pipeline(task=\"text-generation\", model=\"gpt2\")\r\n\r\n# Base example\r\npipe_out = pipe(\"This is a sequence of numbers: 1 2 3 4\", do_sample=False)\r\nprint(pipe_out)\r\n\r\n# Add min_new_tokens -> no change because the default maximum number of tokens is smaller\r\n# This will likely be an exception in the future.\r\npipe_out = pipe(\"This is a sequence of numbers: 1 2 3 4\", do_sample=False, min_new_tokens=100)\r\nprint(pipe_out)\r\n\r\n# If we add max_new_tokens -> it works as expected\r\npipe_out = pipe(\"This is a sequence of numbers: 1 2 3 4\", do_sample=False, min_new_tokens=100, max_new_tokens=100)\r\nprint(pipe_out)\r\n```", "(I'm closing since the feature already exists -- feel free to continue commenting)", "Awesome thanks! ", "> (I'm closing since the feature already exists -- feel free to continue commenting)\r\n\r\nThanks @gante, just to confirm which version of transformers has this been included in?\r\nCurrently transformers 4.31.0 with pipeline task text-to-image produces this error when running with min_new_tokens \r\n\r\nSimple Caption Pipeline:\r\n```\r\n\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline(\"image-to-text\",model=\"Salesforce/blip-image-captioning-base\",min_new_tokens=5, max_new_tokens=20)\r\ncaption = pipe(\"https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png\")\r\nprint(caption)\r\n```\r\n\r\n\r\n```\r\n pipe = pipeline(\"image-to-text\",model=\"Salesforce/blip-image-captioning-base\",min_new_tokens=5, max_new_tokens=20)\r\n File \"C:\\Python310\\lib\\site-packages\\transformers\\pipelines\\__init__.py\", line 988, in pipeline\r\n return pipeline_class(model=model, framework=framework, task=task, **kwargs)\r\n File \"C:\\Python310\\lib\\site-packages\\transformers\\pipelines\\image_to_text.py\", line 55, in __init__\r\n super().__init__(*args, **kwargs)\r\n File \"C:\\Python310\\lib\\site-packages\\transformers\\pipelines\\base.py\", line 816, in __init__\r\n self._preprocess_params, self._forward_params, self._postprocess_params = self._sanitize_parameters(**kwargs)\r\nTypeError: ImageToTextPipeline._sanitize_parameters() got an unexpected keyword argument 'min_new_tokens'\r\n\r\n```", "Just ran through your example and that ran without an issue, I'm guessing my issue might be the ImagetoTextPipeline doesn't have that feature? ", "@mediocreatmybest quite possibly `ImagetoTextPipeline` is not correctly accepting text generation arguments -- will check it!", "@mediocreatmybest #24989 fixes it :) (feel free to pip install from that PR)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Commenting so this doesnโ€™t close automatically.\r\n\r\nAs mentioned in the other thread, is this still possible? or what would be the recommended way to pass this to pipeline for the mentioned tasks?\r\n\r\nThanks. 
:)", "Yes, it is still possible, but the fix may take a few weeks (it is a systemic change across pipelines, see [this comment](https://github.com/huggingface/transformers/pull/24989#issuecomment-1647767932)) :)", "> Yes, it is still possible, but the fix may take a few weeks (it is a systemic change across pipelines, see [this comment](https://github.com/huggingface/transformers/pull/24989#issuecomment-1647767932)) :)\n\nAwesome I'll keep an eye out and update my HF space after then. Thanks!\n\nI'm happy to close this request if it looks to be in the \"pipeline\". :) ", "(Let's keep it open, so we have a reminder to complete it!)", "Bump! ", "(assigned to me and added WIP so this doesn't get closed)" ]
1,689
1,697
null
NONE
null
### Feature request Pipeline already supports the option max_new_tokens. I'm requesting that the existing "min_new_tokens" option be usable with pipeline the same way as "max_new_tokens". Currently, trying to specify "min_new_tokens" throws an error because it is unrecognised. ### Motivation Maintaining consistency, as the previous way to specify tokens is deprecated. ### Your contribution I can test, but I'm not a developer. So no, I couldn't do a PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24836/timeline
reopened
null
null
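For the image-to-text case raised near the end of the thread above (where `min_new_tokens` was rejected by `ImageToTextPipeline._sanitize_parameters`), one possible interim workaround is to route generation constraints through `generate_kwargs` rather than as top-level pipeline arguments. This is a sketch, not the confirmed fix from #24989, and it assumes the pipeline forwards `generate_kwargs` to `model.generate`.

```python
# Hedged sketch: pass min/max_new_tokens via generate_kwargs, which
# generation pipelines are assumed to forward to model.generate.
from transformers import pipeline

pipe = pipeline(task="image-to-text", model="Salesforce/blip-image-captioning-base")
out = pipe(
    "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
    generate_kwargs={"min_new_tokens": 5, "max_new_tokens": 20},
)
print(out[0]["generated_text"])
```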
https://api.github.com/repos/huggingface/transformers/issues/24835
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24835/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24835/comments
https://api.github.com/repos/huggingface/transformers/issues/24835/events
https://github.com/huggingface/transformers/issues/24835
1,805,836,050
I_kwDOCUB6oc5rot8S
24,835
Reflexion Agent implementation?
{ "login": "ghosthamlet", "id": 758325, "node_id": "MDQ6VXNlcjc1ODMyNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/758325?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghosthamlet", "html_url": "https://github.com/ghosthamlet", "followers_url": "https://api.github.com/users/ghosthamlet/followers", "following_url": "https://api.github.com/users/ghosthamlet/following{/other_user}", "gists_url": "https://api.github.com/users/ghosthamlet/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghosthamlet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghosthamlet/subscriptions", "organizations_url": "https://api.github.com/users/ghosthamlet/orgs", "repos_url": "https://api.github.com/users/ghosthamlet/repos", "events_url": "https://api.github.com/users/ghosthamlet/events{/privacy}", "received_events_url": "https://api.github.com/users/ghosthamlet/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,692
1,692
NONE
null
### Feature request Reflexion: Language Agents with Verbal Reinforcement Learning (https://arxiv.org/abs/2303.11366) has code at https://github.com/noahshinn024/reflexion; can you integrate it into transformers_agents? ### Motivation The Reflexion agent is a very interesting advanced agent, and its code is already open-sourced, so would it be easy to integrate into transformers_agents? ### Your contribution Currently none.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24835/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24835/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24834
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24834/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24834/comments
https://api.github.com/repos/huggingface/transformers/issues/24834/events
https://github.com/huggingface/transformers/issues/24834
1,805,827,132
I_kwDOCUB6oc5rorw8
24,834
Pipeline image-to-text task and Bitsandbytes error
{ "login": "mediocreatmybest", "id": 80406625, "node_id": "MDQ6VXNlcjgwNDA2NjI1", "avatar_url": "https://avatars.githubusercontent.com/u/80406625?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mediocreatmybest", "html_url": "https://github.com/mediocreatmybest", "followers_url": "https://api.github.com/users/mediocreatmybest/followers", "following_url": "https://api.github.com/users/mediocreatmybest/following{/other_user}", "gists_url": "https://api.github.com/users/mediocreatmybest/gists{/gist_id}", "starred_url": "https://api.github.com/users/mediocreatmybest/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mediocreatmybest/subscriptions", "organizations_url": "https://api.github.com/users/mediocreatmybest/orgs", "repos_url": "https://api.github.com/users/mediocreatmybest/repos", "events_url": "https://api.github.com/users/mediocreatmybest/events{/privacy}", "received_events_url": "https://api.github.com/users/mediocreatmybest/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Based on this document, it should be possible, but maybe this is just an issue with multimodal or image processors with pipeline?\n\nhttps://huggingface.co/docs/transformers/main/pipeline_tutorial\n\n_# pip install accelerate bitsandbytes\nimport torch\nfrom transformers import pipeline\n\npipe = pipeline(model=\"facebook/opt-1.3b\", device_map=\"auto\", model_kwargs={\"load_in_8bit\": True})\noutput = pipe(\"This is a cool example!\", do_sample=True, top_p=0.95)_", "Also I did create a huggingface.co spaces using pipeline with the ability to try load in 8bit (obviously errors)\n\nhttps://huggingface.co/spaces/Mediocreatmybest/PipelineImageCaption\n\nThanks. ", "Adding the stack trace from google colab.\n\n---------------------------------------------------------------------------\nRuntimeError Traceback (most recent call last)\n<ipython-input-4-ca06d49da534> in <cell line: 19>()\n 17 captioner\n 18 # caption\n---> 19 caption = captioner(image)[0]['generated_text']\n 20 print(caption)\n\n16 frames\n/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)\n 457 weight, bias, self.stride,\n 458 _pair(0), self.dilation, self.groups)\n--> 459 return F.conv2d(input, weight, bias, self.stride,\n 460 self.padding, self.dilation, self.groups)\n 461 \n\nRuntimeError: Input type (float) and bias type (c10::Half) should be the same", "cc @younesbelkada ", "Hi @mediocreatmybest \r\nThanks for the issue, it seems the input image needs to be converted into half-precision (`torch.float16`), can you share a small handy reproducible snippet that leads to your bug? ", "Thanks for the fast response!\r\n\r\nThe snippet I was using to test on google colab and on my personal device was:\r\n\r\n```\r\nfrom transformers import pipeline\r\nimport torch\r\n\r\n\r\nimage = \"https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png\"\r\nmodel = \"Salesforce/blip-image-captioning-base\"\r\n\r\nmodel_kwargs = {\"load_in_8bit\": True, \"torch_dtype\": torch.float16}\r\ncaptioner = pipeline(task=\"image-to-text\",\r\nmodel=model,\r\nmax_new_tokens=30,\r\nmodel_kwargs=model_kwargs, use_fast=True\r\n)\r\n# load model\r\ncaptioner\r\n# caption\r\ncaption = captioner(image)[0]['generated_text']\r\nprint(caption)\r\n\r\n```\r\n\r\n(Copy and pasted from my mobile device, hopefully this formatted correctly)\r\n\r\nThanks ๐Ÿ™ ", "I encountered similar errors while using Blip/Blip2/Git models in an image_to_text pipeline. In my case, I was working with float16 instead of 8bit precision, as under my setup I was encountering additional issues with 8bit. I think there's a very good chance that the fix I've made in #24947 might also fix your issue (for the three models I've implemented the fix for). If you're able to give it a try I'd be interested in hearing if it fixes your issue too.", "> I encountered similar errors while using Blip/Blip2/Git models in an image_to_text pipeline. In my case, I was working with float16 instead of 8bit precision, as under my setup I was encountering additional issues with 8bit. I think there's a very good chance that the fix I've made in #24947 might also fix your issue (for the three models I've implemented the fix for). If you're able to give it a try I'd be interested in hearing if it fixes your issue too.\r\n\r\nThanks @JimAllanson, happy to try test, but I'm pretty new to Python, what is the best way to test this for you? editing the site-packages with the change? " ]
1,689
1,689
1,689
NONE
null
### System Info Python 3.10.6 Transformers 4.30.0 Bitsandbytes 0.39.1 Windows / Linux ### Who can help? @nar ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Using a 4- or 8-bit quantised model such as: https://huggingface.co/Mediocreatmybest/blip2-opt-2.7b_8bit ### Expected behavior The pipeline image processor should detect that the model is running as a 4- or 8-bit model with bitsandbytes. I apologise if this should be a feature request or if it's a bug; I couldn't find any examples of what I was trying to do. When running through the pipeline examples from the Hugging Face website, if I try using an 8-bit model, the model seems to be detected correctly and cast to 8-bit, but the processor doesn't seem to follow suit and runs at its default, throwing an error that they both should be set to the same floating point type. I've uploaded a few models set to 8-bit to save on size and memory, as BLIP2 is pretty heavy and using it on consumer devices is obviously challenging. The models I've uploaded to HuggingFace are: Mediocreatmybest/blip2-opt-2.7b_8bit Mediocreatmybest/blip2-opt-6.7b_8bit Mediocreatmybest/blip2-flan-t5-xxl_8bit I can get them working with regular methods, but as I'm a beginner it's obviously challenging. Thanks again for all the great work!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24834/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24834/timeline
completed
null
null
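The `Input type (float) and bias type (c10::Half)` error in the record above arises when float32 pixel values hit fp16/int8 weights. Below is a minimal sketch of the workaround suggested in the thread, bypassing the pipeline and casting the image tensor by hand; the explicit `torch.float16` cast is an assumption about the model's compute dtype rather than something the record confirms, and the snippet needs `bitsandbytes` plus a CUDA GPU.

```python
# Hedged sketch: run an 8-bit BLIP captioning model without the pipeline,
# casting pixel values to half precision to match the quantized weights.
import requests
import torch
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

model_id = "Salesforce/blip-image-captioning-base"
processor = BlipProcessor.from_pretrained(model_id)
model = BlipForConditionalGeneration.from_pretrained(
    model_id, load_in_8bit=True, device_map="auto"  # requires bitsandbytes
)

url = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
# Cast the float32 image tensor to float16 before the forward pass.
pixel_values = inputs.pixel_values.to(model.device, torch.float16)
out = model.generate(pixel_values=pixel_values, max_new_tokens=20)
print(processor.decode(out[0], skip_special_tokens=True))
```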
https://api.github.com/repos/huggingface/transformers/issues/24833
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24833/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24833/comments
https://api.github.com/repos/huggingface/transformers/issues/24833/events
https://github.com/huggingface/transformers/pull/24833
1,805,790,014
PR_kwDOCUB6oc5VkcQm
24,833
Bump cryptography from 41.0.0 to 41.0.2 in /examples/research_projects/decision_transformer
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
CONTRIBUTOR
null
Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.0 to 41.0.2. <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's changelog</a>.</em></p> <blockquote> <p>41.0.2 - 2023-07-10</p> <pre><code> * Fixed bugs in creating and parsing SSH certificates where critical options with values were handled incorrectly. Certificates are now created correctly and parsing accepts correct values as well as the previously generated invalid forms with a warning. In the next release, support for parsing these invalid forms will be removed. <p>.. _v41-0-1:</p> <p>41.0.1 - 2023-06-01 </code></pre></p> <ul> <li>Temporarily allow invalid ECDSA signature algorithm parameters in X.509 certificates, which are generated by older versions of Java.</li> <li>Allow null bytes in pass phrases when serializing private keys.</li> </ul> <p>.. _v41-0-0:</p> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/pyca/cryptography/commit/7431db737cf0407560fac689d24f1d2e5efc349d"><code>7431db7</code></a> bump for 41.0.2 (<a href="https://redirect.github.com/pyca/cryptography/issues/9215">#9215</a>)</li> <li><a href="https://github.com/pyca/cryptography/commit/e190ef190525999d1f599cf8c3aef5cb7f3a8bc4"><code>e190ef1</code></a> Backport ssh cert fix (<a href="https://redirect.github.com/pyca/cryptography/issues/9211">#9211</a>)</li> <li><a href="https://github.com/pyca/cryptography/commit/bb204c8ca7bc0df0c24b6f6c1f59ed5f5bee9226"><code>bb204c8</code></a> Backport: Added PyPy 3.10 to CI (<a href="https://redirect.github.com/pyca/cryptography/issues/8933">#8933</a>) (<a href="https://redirect.github.com/pyca/cryptography/issues/9210">#9210</a>)</li> <li><a href="https://github.com/pyca/cryptography/commit/d02de9f26e9a2353e89427c1cea8b9ed2bae969e"><code>d02de9f</code></a> changelog and version bump (<a href="https://redirect.github.com/pyca/cryptography/issues/9008">#9008</a>)</li> <li><a href="https://github.com/pyca/cryptography/commit/53dc686431f59658d892b83383a330d796105843"><code>53dc686</code></a> Backport null fix (<a href="https://redirect.github.com/pyca/cryptography/issues/9007">#9007</a>)</li> <li><a href="https://github.com/pyca/cryptography/commit/b99900596e65f31543d62cf1a52069c709ba7970"><code>b999005</code></a> Backport tolerate (<a href="https://redirect.github.com/pyca/cryptography/issues/9006">#9006</a>)</li> <li>See full diff in <a href="https://github.com/pyca/cryptography/compare/41.0.0...41.0.2">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=cryptography&package-manager=pip&previous-version=41.0.0&new-version=41.0.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24833/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24833/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24833", "html_url": "https://github.com/huggingface/transformers/pull/24833", "diff_url": "https://github.com/huggingface/transformers/pull/24833.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24833.patch", "merged_at": 1689592637000 }
https://api.github.com/repos/huggingface/transformers/issues/24832
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24832/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24832/comments
https://api.github.com/repos/huggingface/transformers/issues/24832/events
https://github.com/huggingface/transformers/issues/24832
1,805,738,711
I_kwDOCUB6oc5roWLX
24,832
`trainer.evaluate` throws an error when using multiple evaluation datasets
{ "login": "hwijeen", "id": 29157715, "node_id": "MDQ6VXNlcjI5MTU3NzE1", "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hwijeen", "html_url": "https://github.com/hwijeen", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "repos_url": "https://api.github.com/users/hwijeen/repos", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes, please do suggest a PR to fix this, thanks!" ]
1,689
1,689
1,689
CONTRIBUTOR
null
### System Info ``` - `transformers` version: 4.31.0.dev0 - Platform: macOS-12.5.1-arm64-arm-64bit - Python version: 3.8.16 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ``` ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Use the transformers [examples code](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) on summarization (or any other task). 2. Pass multiple evaluation datasets as follows when running the code. This should be supported as [documented here](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Trainer.eval_dataset). ``` python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --train_file "a.txt" "b.txt" \ --validation_file "a_valid.txt" "b_valid.txt" ``` ### Expected behavior The example code should not raise an error when `trainer.evaluate` is called, which happens both 1) intermittently during training and 2) [at the end of the training](https://github.com/huggingface/transformers/blob/5bb4430edc7df9f9950d412d98bbe505cc4d328b/examples/pytorch/summarization/run_summarization.py#L695). During training, Trainer [checks whether the passed `eval_dataset` consists of multiple datasets](https://github.com/huggingface/transformers/blob/5bb4430edc7df9f9950d412d98bbe505cc4d328b/src/transformers/trainer.py#L2216). Since this check is missing in case 2), evaluation at the end of training raises an error. I'd be happy to make a PR on this :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24832/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24832/timeline
completed
null
null
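Since the record above is about the dict form of `eval_dataset`, a self-contained sketch of that API may help. The tiny checkpoint and toy datasets below are stand-ins chosen purely for illustration, and the expectation that the final `trainer.evaluate()` call iterates over the dict only holds once the fix described in the record lands.

```python
# Hedged sketch of multiple evaluation datasets with Trainer.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

ckpt = "hf-internal-testing/tiny-random-bert"  # tiny stand-in model
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

def make_ds(texts):
    enc = dict(tok(texts, truncation=True, padding="max_length", max_length=16))
    enc["labels"] = [0] * len(texts)  # dummy labels for the toy task
    return Dataset.from_dict(enc)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1, report_to=[]),
    train_dataset=make_ds(["a b c", "d e f"]),
    eval_dataset={"a": make_ds(["g h"]), "b": make_ds(["i j"])},  # dict form
)
trainer.train()
# With the fix, this should report per-dataset metrics such as
# eval_a_loss and eval_b_loss instead of raising on the dict.
print(trainer.evaluate())
```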
https://api.github.com/repos/huggingface/transformers/issues/24831
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24831/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24831/comments
https://api.github.com/repos/huggingface/transformers/issues/24831/events
https://github.com/huggingface/transformers/issues/24831
1,805,464,748
I_kwDOCUB6oc5rnTSs
24,831
RwkvForCausalLM does not support gradient checkpointing.
{ "login": "jonataslaw", "id": 35742643, "node_id": "MDQ6VXNlcjM1NzQyNjQz", "avatar_url": "https://avatars.githubusercontent.com/u/35742643?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonataslaw", "html_url": "https://github.com/jonataslaw", "followers_url": "https://api.github.com/users/jonataslaw/followers", "following_url": "https://api.github.com/users/jonataslaw/following{/other_user}", "gists_url": "https://api.github.com/users/jonataslaw/gists{/gist_id}", "starred_url": "https://api.github.com/users/jonataslaw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonataslaw/subscriptions", "organizations_url": "https://api.github.com/users/jonataslaw/orgs", "repos_url": "https://api.github.com/users/jonataslaw/repos", "events_url": "https://api.github.com/users/jonataslaw/events{/privacy}", "received_events_url": "https://api.github.com/users/jonataslaw/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[ { "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false } ]
[ "Thanks for reporting @jonataslaw \r\nhttps://github.com/huggingface/transformers/pull/24955 has introduced the GC support for RWKV models, can you try that out by installing `transformers` from source and let us know how it goes?\r\n\r\n```bash\r\npip uninstall transformers\r\npip install git+https://github.com/huggingface/transformers.git\r\n```", "Thanks for the quick update!\r\nI tested your PR, and it works like a charm.\r\nHowever it stops with an error about 10% of training (during eval):\r\n\r\n File \"/home/jonataslaw/miniconda3/lib/python3.9/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File \"/home/jonataslaw/miniconda3/lib/python3.9/site-packages/transformers/models/rwkv/modeling_rwkv.py\", line 645, in forward\r\n self._rescale_layers()\r\n File \"/home/jonataslaw/miniconda3/lib/python3.9/site-packages/transformers/models/rwkv/modeling_rwkv.py\", line 738, in _rescale_layers\r\n block.attention.output.weight.quant_state[0].div_(\r\nRuntimeError: result type Float can't be cast to the desired output type Byte\r\n\r\n\r\nEdit: NVM, it is a unrelated problem about inference.", "Ohh, I get it, it is the nested quantization problem related in https://github.com/huggingface/transformers/issues/23848", "Yes, sadly nested quantization is not supported for RWKV, please use the un-nested one ! ", "Thanks for the update.\r\nI will change my code and stay alert in case it changes in the future.\r\n\r\nThanks again for fixing the GC, it helps a lot.", "Thanks very much @jonataslaw !" ]
1,689
1,689
1,689
NONE
null
### System Info Is there a reason why RwkvForCausalLM does not support gradient checkpointing, given that RWKV-LM supports it? @ArthurZucker and @younesbelkada ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction `model.gradient_checkpointing_enable()` ``` ValueError(f"{self.__class__.__name__} does not support gradient checkpointing.") ValueError: RwkvForCausalLM does not support gradient checkpointing. ``` ### Expected behavior No errors, since RWKV-LM supports gradient checkpointing.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24831/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24831/timeline
completed
null
null
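Putting the two take-aways from the thread above together (gradient checkpointing works for RWKV once #24955 is in, and nested quantization does not), a hedged sketch might look as follows; the checkpoint name is just an example RWKV model, and `bitsandbytes` plus a CUDA GPU are required.

```python
# Hedged sketch: un-nested 4-bit quantization plus gradient checkpointing
# for an RWKV model, per the workaround discussed above.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=False,  # keep quantization un-nested for RWKV
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "RWKV/rwkv-4-169m-pile", quantization_config=quant_config, device_map="auto"
)
model.gradient_checkpointing_enable()  # no longer raises after #24955
print(model.is_gradient_checkpointing)
```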
https://api.github.com/repos/huggingface/transformers/issues/24830
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24830/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24830/comments
https://api.github.com/repos/huggingface/transformers/issues/24830/events
https://github.com/huggingface/transformers/issues/24830
1,805,456,754
I_kwDOCUB6oc5rnRVy
24,830
Overlapping offset mappings when manually adding special tokens
{ "login": "bilelomrani1", "id": 16692099, "node_id": "MDQ6VXNlcjE2NjkyMDk5", "avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bilelomrani1", "html_url": "https://github.com/bilelomrani1", "followers_url": "https://api.github.com/users/bilelomrani1/followers", "following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}", "gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}", "starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions", "organizations_url": "https://api.github.com/users/bilelomrani1/orgs", "repos_url": "https://api.github.com/users/bilelomrani1/repos", "events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}", "received_events_url": "https://api.github.com/users/bilelomrani1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! \r\nThe `legacy=False` option is pretty much useless for fast tokenizers, so just a FYI here! (the warning is triggered because a slow instance is initialised!) \r\nThe `31822` corresponds to `SPIECE_UNDERLINE = \"โ–\"`. \r\nThe tokenizer on the hub seems to have a few issues already:\r\n```python \r\n>>> tok = AutoTokenizer.from_pretrained(\"openlm-research/open_llama_7b\", legacy=False, use_fast = False)\r\n>>> tok.encode(\" \")\r\n[1]\r\n>>> tok = AutoTokenizer.from_pretrained(\"openlm-research/open_llama_7b\", legacy=False, use_fast = True)\r\n>>> tok.encode(\" \")\r\n[1, 31822, ..........., 31822]\r\n```\r\nThen, there are no `tokenizer.json` files there, which suggest they are using the default Llama converted. But this might not be intended. As you can see, the `huggyllama/llama-7b` is working as expected. I suggest you open an issue on [the original repo](https://huggingface.co/openlm-research/open_llama_7b/discussions)! \r\nI am not familiar with this model but might have been wrongly converted", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,693
1,693
NONE
null
### System Info - `transformers` version: 4.31.0.dev0 (commit 21946a8cf4a273f35ac2f3a53edafc398699f527) - Platform: macOS-13.2.1-x86_64-i386-64bit - Python version: 3.9.17 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I need to manually insert special tokens in a string so I'm using `add_special_tokens=False` and I encounter decoding inconsistencies between tokenizers that are not present with `add_special_tokens=True`. Here are the test cases with the `openlm-research/open_llama_7b` tokenizer. ```python >>> tok = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b", legacy=False) >>> encodings = tok("<s> SYSTEM", add_special_tokens=False, return_offsets_mapping=True) >>> encodings["input_ids"] [1, 31822, 18469, 29767] >>> encodings["offset_mapping"] [(0, 3), (3, 4), (3, 6), (6, 10)] >>> tok.decode(encodings["input_ids"]) '<s> SYSTEM' ``` In the offset mapping, the 2nd and 3rd token are overlapping which is unexpected, and the decoded sequence does not give back the original string, but adds an additional whitespace after the BOS token. When letting the tokenizer handle the special tokens by itself (`add_special_tokens=True`), the issue is not present. ```python >>> tok = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b", legacy=False) >>> encodings = tok("SYSTEM", add_special_tokens=True, return_offsets_mapping=True) >>> encodings["input_ids"] [1, 18469, 29767] >>> encodings["offset_mapping"] [(0, 0), (0, 2), (2, 6)] >>> tok.decode(encodings["input_ids"]) '<s> SYSTEM' ``` ### Expected behavior Here is the test case with the LLaMA tokenizer, which works as expected, even when manually handling special tokens. ```python >>> tok = AutoTokenizer.from_pretrained("huggyllama/llama-7b", legacy=False) >>> encodings = tok("<s> SYSTEM", add_special_tokens=False, return_offsets_mapping=True) >>> encodings["input_ids"] [1, 28962, 1254, 12665] >>> encodings["offset_mapping"] [(0, 3), (3, 6), (6, 8), (8, 10)] >>> tok.decode(encodings["input_ids"]) '<s> SYSTEM' ``` The offsets are non-overlapping, and the decoding gives back the original string without additional whitespaces.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24830/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24830/timeline
completed
null
null
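A small, tokenizer-agnostic check for the two symptoms in the record above (overlapping offsets and a lossy decode round-trip) makes it easy to compare checkpoints. This is a diagnostic sketch, not part of any reported fix.

```python
# Diagnostic sketch: flag overlapping offset spans and lossy round-trips.
from transformers import AutoTokenizer

def check(checkpoint, text):
    tok = AutoTokenizer.from_pretrained(checkpoint)
    enc = tok(text, add_special_tokens=False, return_offsets_mapping=True)
    offsets = enc["offset_mapping"]
    # Each span should start at or after the end of the previous span.
    overlaps = [(a, b) for a, b in zip(offsets, offsets[1:]) if b[0] < a[1]]
    lossless = tok.decode(enc["input_ids"]) == text
    print(f"{checkpoint}: overlaps={overlaps} lossless={lossless}")

check("huggyllama/llama-7b", "<s> SYSTEM")  # expected: no overlaps, lossless
```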
https://api.github.com/repos/huggingface/transformers/issues/24829
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24829/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24829/comments
https://api.github.com/repos/huggingface/transformers/issues/24829/events
https://github.com/huggingface/transformers/pull/24829
1,805,372,220
PR_kwDOCUB6oc5Vi-go
24,829
[WIP] Add state in segments id calculation
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,690
1,690
COLLABORATOR
null
# What does this PR do? At the moment, segments are calculated on an image-per-image basis. This means that when predicting with certain models, e.g. DETR, the segment id that each class corresponds to can differ across images in a batch and across batches. This PR adds a private attribute to the image processor class to store the class-to-segment_id mapping as state. /!\ There is a possible breaking change, as `compute_segments` now returns three rather than two objects. Fixes #23461 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24829/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24829/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24829", "html_url": "https://github.com/huggingface/transformers/pull/24829", "diff_url": "https://github.com/huggingface/transformers/pull/24829.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24829.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24828
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24828/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24828/comments
https://api.github.com/repos/huggingface/transformers/issues/24828/events
https://github.com/huggingface/transformers/pull/24828
1,805,321,357
PR_kwDOCUB6oc5VizNn
24,828
๐ŸŒ[i18n-KO] Translated pipeline_webserver.md to Korean
{ "login": "kihoon71", "id": 75935546, "node_id": "MDQ6VXNlcjc1OTM1NTQ2", "avatar_url": "https://avatars.githubusercontent.com/u/75935546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kihoon71", "html_url": "https://github.com/kihoon71", "followers_url": "https://api.github.com/users/kihoon71/followers", "following_url": "https://api.github.com/users/kihoon71/following{/other_user}", "gists_url": "https://api.github.com/users/kihoon71/gists{/gist_id}", "starred_url": "https://api.github.com/users/kihoon71/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kihoon71/subscriptions", "organizations_url": "https://api.github.com/users/kihoon71/orgs", "repos_url": "https://api.github.com/users/kihoon71/repos", "events_url": "https://api.github.com/users/kihoon71/events{/privacy}", "received_events_url": "https://api.github.com/users/kihoon71/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "๋ณ„๋„์˜ ๋ฆฌ๋ทฐ ์‚ฌํ•ญ ์—†์Šต๋‹ˆ๋‹ค. ์ˆ˜๊ณ ํ•˜์…จ์Šต๋‹ˆ๋‹ค :)" ]
1,689
1,690
1,690
CONTRIBUTOR
null
<!-- PR์˜ ์ œ๋ชฉ์€ "๐ŸŒ [i18n-KO] Translated `<your_file>.md` to Korean" ์œผ๋กœ ๋ถ€ํƒ๋“œ๋ฆฝ๋‹ˆ๋‹ค! --> # What does this PR do? Translated the `pipeline_webserver.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [x] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [x] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review? (Initial) @0525hhgus, @Sunmin0520, @54data, @seank021, @augustinLib <!-- 1. ์œ„ ์ฒดํฌ๊ฐ€ ๋ชจ๋‘ ์™„๋ฃŒ๋œ ๋’ค์—, ์ด ์•„๋ž˜์— ๋ฆฌ๋ทฐ๋ฅผ ์š”์ฒญํ•  ํŒ€์›๋“ค์„ ๋ฉ˜์…˜ํ•ด์ฃผ์„ธ์š”! --> <!-- May you please review this PR? @member1 @member2 ... --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) May you please review this PR? @sgugger, @ArthurZucker, @eunseojo <!-- 2. ํŒ€์›๋“ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ€ ๋๋‚œ ํ›„์—๋งŒ ํ—ˆ๊น…ํŽ˜์ด์Šค ์ง์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> <!-- May you please review this PR? @sgugger, @ArthurZucker, @eunseojo --> <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24828/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24828/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24828", "html_url": "https://github.com/huggingface/transformers/pull/24828", "diff_url": "https://github.com/huggingface/transformers/pull/24828.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24828.patch", "merged_at": 1690375238000 }
https://api.github.com/repos/huggingface/transformers/issues/24827
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24827/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24827/comments
https://api.github.com/repos/huggingface/transformers/issues/24827/events
https://github.com/huggingface/transformers/pull/24827
1,805,198,388
PR_kwDOCUB6oc5ViXkD
24,827
[`core`] PEFT integration
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I would like to have first review of the draft if possible, to see if we are inline with the approach ๐Ÿ™ @sgugger - Thanks !", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24827). All of your documentation changes will be reflected on that endpoint.", "Thank you for your review @pacman100 ! \r\nI think the canonical way to load PEFT models for inference would still be to use PEFT classes (i.e. either `AutoPeftModelForCausalLM` or `PeftModel` - btw we should encourage users to use more and more `AutoPeftModelForCausalLM` instead of `PeftModel`) \r\nThis PR is intended to make things even easier for users and for further integrations with HF ecosystem (`pipeline`, `diffusers`) and it will be clearly documented. I also think we should update the inference widgets after the PEFT release.", "> Mmm that's not the correct way to add a new peft job as it will always run on any PR, even if it shouldn't be run.\r\n\r\n@younesbelkada \r\n\r\nYou can take a look \r\n\r\nhttps://github.com/huggingface/transformers/blob/476be08c4aa96f8c1cae4200d2677bbe8f12cf80/utils/tests_fetcher.py#L720\r\n\r\n(check `examples_test_list.txt` and `examples_tests_to_run` in the same file and the 2 CircleCI config files)\r\n(you have to check against `peft_integration` in your case)", "The design as is would make it hard for `diffusers` to leverage `transformes` to load PEFT weights for transformers models. \r\nIn `diffusers` we have the following workflow:\r\n\r\n```py\r\nfrom diffusers import DiffusionPipeline\r\n\r\npipe = DiffusionPipeline.from_pretrained(\"runwayml/stable-diffusion-v1-5\")\r\n# now we have a loaded CLIPTextModel under `pipe.text_encoder`\r\n\r\npipe.load_lora(...)\r\n# doing this means we would want to call `text_encoder.load_adapter(...)` under the hood\r\n```\r\n\r\nThis means we necessarily need `transformers` to support the ability to load lora weights into already instantiated models, just like MMS allows it via `load_adapter` - see: https://huggingface.co/docs/transformers/v4.31.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForCTC.load_adapter.example\r\n\r\n[Edit] We could come around it, by wrapping `pipe.text_encoder` into a `PeftModel` under the hood when doing `pipe.load_lora(...)` to transform `CLIPTextModel` into `PeftModel` but that:\r\n- would break some internal `diffusers` code \r\n- force us to wrap logic around `transformers`, e.g. we could just do `pipe.text_encoder.load_adapter(...)`, but would have to first wrap every transformers model into a PEFT model", "From purely a `transformers` point of view, I would also struggle a bit with the following:\r\n\r\n1.) PEFT weights seemingly should only be loaded with `AutoModel`, which is restrictive as there is no need to go over the AutoModel class if one knows the model.\r\n\r\nIt does look like the following would be possible:\r\n\r\n```py\r\nfrom transformers import LlamaForCausalLM\r\n\r\nmodel = LlamaForCausalLM.from_pretrained(\"tloen/alpaca-lora-7b\")\r\n```\r\n\r\nbut then `model.__class__` is of type `PeftModel` which would be confusing to me - I used a class method of `LlamaForCausalLM`\r\n\r\n2.) I don't like that `.from_pretrained(<peft/model/id>)` more or less fully dispatches to the `peft` library instead of staying in Transformers' land. I imagined `peft` to be used as a utility library, not Transformers to dispatch to peft.\r\n\r\n\r\n=> Could we not create a `PeftModelMixin` class so that `peft` operates more under the hood?", "Closing as #25077 got merged" ]
1,689
1,692
1,692
CONTRIBUTOR
null
# What does this PR do?

This PR is an attempt to tightly integrate the PEFT library with transformers, by offering users the ability to load PEFT models out of the box from `AutoModelForxxx.from_pretrained()` if the local directory or the Hub model id contains adapter weights and an adapter config.

```python
import tempfile

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import AutoPeftModelForCausalLM

peft_model_id = "ybelkada/opt-350m-lora"

model = AutoModelForCausalLM.from_pretrained(peft_model_id)

with tempfile.TemporaryDirectory() as tmpdirname:
    peft_model = AutoPeftModelForCausalLM.from_pretrained(peft_model_id)
    peft_model.save_pretrained(tmpdirname)
    model = AutoModelForCausalLM.from_pretrained(tmpdirname)

print(model)
```

Although this is similar to what has been introduced in https://github.com/huggingface/peft/pull/694, this PR offers a direct integration with transformers.

## TODOs:

- [x] handle `PeftModel.from_pretrained(xxx)` kwargs
- [x] tests
- [x] docs (with the help of @stevhliu)

cc @sgugger @pacman100 @BenjaminBossan
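For context, the rough shape of the dispatch this implies could look like the sketch below. It is hypothetical, not this PR's implementation: `load_maybe_peft` is an invented helper name, the check only covers local directories, and it assumes `peft` is installed.

```python
import os

from transformers import AutoModelForCausalLM
from peft import PeftConfig, PeftModel


def load_maybe_peft(model_id: str):
    # Hypothetical helper: if a local checkpoint directory ships an
    # adapter_config.json, resolve the base model from the PEFT config and
    # attach the adapter on top of it; otherwise load the model as usual.
    adapter_config = os.path.join(model_id, "adapter_config.json")
    if os.path.isdir(model_id) and os.path.exists(adapter_config):
        peft_config = PeftConfig.from_pretrained(model_id)
        base_model = AutoModelForCausalLM.from_pretrained(
            peft_config.base_model_name_or_path
        )
        return PeftModel.from_pretrained(base_model, model_id)
    return AutoModelForCausalLM.from_pretrained(model_id)
```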
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24827/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24827/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24827", "html_url": "https://github.com/huggingface/transformers/pull/24827", "diff_url": "https://github.com/huggingface/transformers/pull/24827.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24827.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24826
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24826/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24826/comments
https://api.github.com/repos/huggingface/transformers/issues/24826/events
https://github.com/huggingface/transformers/pull/24826
1,805,183,888
PR_kwDOCUB6oc5ViURW
24,826
Remove unused code in GPT-Neo
{ "login": "namespace-Pt", "id": 61188463, "node_id": "MDQ6VXNlcjYxMTg4NDYz", "avatar_url": "https://avatars.githubusercontent.com/u/61188463?v=4", "gravatar_id": "", "url": "https://api.github.com/users/namespace-Pt", "html_url": "https://github.com/namespace-Pt", "followers_url": "https://api.github.com/users/namespace-Pt/followers", "following_url": "https://api.github.com/users/namespace-Pt/following{/other_user}", "gists_url": "https://api.github.com/users/namespace-Pt/gists{/gist_id}", "starred_url": "https://api.github.com/users/namespace-Pt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/namespace-Pt/subscriptions", "organizations_url": "https://api.github.com/users/namespace-Pt/orgs", "repos_url": "https://api.github.com/users/namespace-Pt/repos", "events_url": "https://api.github.com/users/namespace-Pt/events{/privacy}", "received_events_url": "https://api.github.com/users/namespace-Pt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger context: this change was discussed in https://github.com/huggingface/transformers/issues/24820 -- this model is the only one that deletes `position_ids` in this function, and there is no apparent reason for it. This is a mostly unused code path and has been part of the original GPT-Neo commit.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24826). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,689
1,689
CONTRIBUTOR
null
# What does this PR do?

Fix [#24820](https://github.com/huggingface/transformers/issues/24820#issuecomment-1635744675) by removing the `else` statement in `GPTNeoForCausalLM`.

## Who can review?

@gante

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24826/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24826/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24826", "html_url": "https://github.com/huggingface/transformers/pull/24826", "diff_url": "https://github.com/huggingface/transformers/pull/24826.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24826.patch", "merged_at": 1689592068000 }
https://api.github.com/repos/huggingface/transformers/issues/24825
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24825/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24825/comments
https://api.github.com/repos/huggingface/transformers/issues/24825/events
https://github.com/huggingface/transformers/pull/24825
1,804,917,907
PR_kwDOCUB6oc5VhaTR
24,825
deprecate `sharded_ddp` training argument
{ "login": "statelesshz", "id": 28150734, "node_id": "MDQ6VXNlcjI4MTUwNzM0", "avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/statelesshz", "html_url": "https://github.com/statelesshz", "followers_url": "https://api.github.com/users/statelesshz/followers", "following_url": "https://api.github.com/users/statelesshz/following{/other_user}", "gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}", "starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions", "organizations_url": "https://api.github.com/users/statelesshz/orgs", "repos_url": "https://api.github.com/users/statelesshz/repos", "events_url": "https://api.github.com/users/statelesshz/events{/privacy}", "received_events_url": "https://api.github.com/users/statelesshz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> That makes sense thanks! Note that while removing it entirely from the doc is fine, we can't abruptly remove it from the library like this. We will need to properly deprecate it first, and in two to three minor versions we can fully remove it.\r\n\r\n@sgugger Thanks for pointing this out. I have rolled back the code changes and added a warning that `sharded_ddp` will be deprecated. Could you take a second look at this PR?\r\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24825). All of your documentation changes will be reflected on that endpoint." ]
1,689
1,696
1,689
CONTRIBUTOR
null
# What does this PR do?

This PR deprecates the `sharded_ddp` training argument, since ShardedDDP has been [upstreamed to PyTorch](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/), so users can use the `fsdp` training argument instead.

---

~~According to fairscale ([see](https://github.com/facebookresearch/fairscale)), PyTorch FSDP is the recommended method for scaling to large NN models. I think Sharded-DDP is dead and it's time to say goodbye to this library.~~

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
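For users migrating off `sharded_ddp`, a minimal sketch of the replacement is shown below. The exact option strings accepted by `fsdp` are an assumption here and vary across transformers versions, so check the `TrainingArguments` documentation of your installed release.

```python
from transformers import TrainingArguments

# Before (now deprecated):
#   TrainingArguments(output_dir="out", sharded_ddp="zero_dp_2")
# After, using the PyTorch-native FSDP path:
args = TrainingArguments(
    output_dir="out",
    fsdp="full_shard",  # assumed option string; see your version's docs
    per_device_train_batch_size=8,
)
```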
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24825/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24825/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24825", "html_url": "https://github.com/huggingface/transformers/pull/24825", "diff_url": "https://github.com/huggingface/transformers/pull/24825.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24825.patch", "merged_at": 1689591463000 }
https://api.github.com/repos/huggingface/transformers/issues/24824
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24824/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24824/comments
https://api.github.com/repos/huggingface/transformers/issues/24824/events
https://github.com/huggingface/transformers/pull/24824
1,804,886,584
PR_kwDOCUB6oc5VhTdC
24,824
Check models used for common tests are small
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,689
1,689
1,689
COLLABORATOR
null
# What does this PR do?

This is the first PR in a series that will aim at reducing the time spent on the tests (and the corresponding cost 😅).

A first analysis of the slowest tests shows that the common tests sometimes use real-life models instead of tiny ones. This PR adds a check that the size of the model for common tests is not bigger than 1M parameters (which is a wide bar; BERT has a version with 55k parameters for its common tests, for instance).

To avoid making the PR too long, the new test is skipped in most of the failures; only the DETR variants, BridgeTower, Canine and CLAP are treated in this PR. For the DETR variants, a full pretrained ResNet-50 was used as the backbone, which is replaced by a tiny random ResNet.

The following models will be treated in follow-up PRs:

* CTRL
* CVT
* DETA
* DPT
* DPT Hybrid
* EfficientNet
* Encodec
* ESM
* Flava
* Git
* GPTSan-Japanese
* Graphormer
* LayoutLM
* LayoutLMv2
* LeViT
* Mask2Former
* Maskformer
* MobileViT
* MobileVit2
* OneFormer
* Perceiver
* SegFormer
* SpeechT5
* SwiftFormer
* TableTransformer
* TimmBackbone
* TVLT
* UperNet
* VideoMAE
* ViT-MAE
* ViViT
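A minimal sketch of the kind of size guard described above is shown below; the helper name, signature and error message are assumptions rather than the merged test code.

```python
import torch


def assert_model_is_tiny(model: torch.nn.Module, max_params: int = 1_000_000) -> None:
    # Count every parameter tensor in the model and fail loudly when the
    # test model exceeds the 1M-parameter budget mentioned above.
    num_params = sum(p.numel() for p in model.parameters())
    assert num_params <= max_params, (
        f"Model has {num_params:,} parameters, above the {max_params:,} budget "
        "for common tests -- use a tiny random config instead."
    )
```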
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24824/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24824/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24824", "html_url": "https://github.com/huggingface/transformers/pull/24824", "diff_url": "https://github.com/huggingface/transformers/pull/24824.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24824.patch", "merged_at": 1689360200000 }
https://api.github.com/repos/huggingface/transformers/issues/24823
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24823/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24823/comments
https://api.github.com/repos/huggingface/transformers/issues/24823/events
https://github.com/huggingface/transformers/pull/24823
1,804,816,078
PR_kwDOCUB6oc5VhD85
24,823
change nn.ReLU to torch.relu in ACT_FNS for OpenAI
{ "login": "nathan-chappell", "id": 36384302, "node_id": "MDQ6VXNlcjM2Mzg0MzAy", "avatar_url": "https://avatars.githubusercontent.com/u/36384302?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nathan-chappell", "html_url": "https://github.com/nathan-chappell", "followers_url": "https://api.github.com/users/nathan-chappell/followers", "following_url": "https://api.github.com/users/nathan-chappell/following{/other_user}", "gists_url": "https://api.github.com/users/nathan-chappell/gists{/gist_id}", "starred_url": "https://api.github.com/users/nathan-chappell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nathan-chappell/subscriptions", "organizations_url": "https://api.github.com/users/nathan-chappell/orgs", "repos_url": "https://api.github.com/users/nathan-chappell/repos", "events_url": "https://api.github.com/users/nathan-chappell/events{/privacy}", "received_events_url": "https://api.github.com/users/nathan-chappell/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24823). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,692
1,692
NONE
null
# What does this PR do?

Fixes #24821

## Before submitting

- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] https://github.com/huggingface/transformers/issues/24821

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

@ArthurZucker @younesbelkada

---

I ran the following test command after installing pytorch:

```bash
python -m pytest -s -v ./tests/models/openai/test_modeling_openai.py
# 68 passed, 38 skipped, 15 warnings in 39.26s
```

I figured that's good enough.
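For context, a sketch of the change the title describes is shown below. The exact contents of `ACT_FNS` in `modeling_openai.py` are paraphrased here rather than quoted from the diff.

```python
import torch
from torch import nn

# Before: the mapping stored the nn.ReLU *class*, so calling the "activation"
# constructed a module instead of applying ReLU to the hidden states.
# After: store the functional form, consistent with the other entries.
ACT_FNS = {
    "relu": torch.relu,  # was: nn.ReLU
    "gelu": nn.functional.gelu,
    "silu": nn.functional.silu,
}
```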
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24823/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24823/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24823", "html_url": "https://github.com/huggingface/transformers/pull/24823", "diff_url": "https://github.com/huggingface/transformers/pull/24823.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24823.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/24822
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24822/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24822/comments
https://api.github.com/repos/huggingface/transformers/issues/24822/events
https://github.com/huggingface/transformers/pull/24822
1,804,805,141
PR_kwDOCUB6oc5VhBiH
24,822
Generate: sequence bias can handle same terminations
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @gante, \r\n\r\nThank you for trying to fix it. The failure is still there with this PR's branch.\r\n\r\n```\r\n File \"/mnt/nvme0/code/huggingface/m4-master/m4/evaluation/launch.py\", line 143, in <module>\r\n main(args)\r\n File \"/mnt/nvme0/code/huggingface/m4-master/m4/evaluation/launch.py\", line 97, in main\r\n score = evaluator(task, accelerator, model, args)\r\n File \"/mnt/nvme0/code/huggingface/m4-master/m4/evaluation/evaluators/in_contexter.py\", line 263, in in_contexter\r\n metric = task.add_batch_metric(metric, **kwargs)\r\n File \"/mnt/nvme0/code/huggingface/m4-master/m4/models/vgpt2/evaluation_open_ended_vqa_in_context_vgpt2.py\", line 336, in add_batch_metric\r\n generated_tokens = self.generate_tokens(**kwargs)\r\n File \"/mnt/nvme0/code/huggingface/m4-master/m4/models/vgpt2/evaluation_open_ended_vqa_in_context_vgpt2.py\", line 312, in generate_tokens\r\n generated_tokens = unwrapped_model.generate(\r\n File \"/home/stas/anaconda3/envs/py38-pt20/lib/python3.8/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/generation/utils.py\", line 1613, in generate\r\n return self.beam_search(\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/generation/utils.py\", line 2930, in beam_search\r\n next_token_scores_processed = logits_processor(input_ids, next_token_scores)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/generation/logits_process.py\", line 92, in __call__\r\n scores = processor(input_ids, scores)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/generation/logits_process.py\", line 618, in __call__\r\n self._prepare_bias_variables(scores)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/generation/logits_process.py\", line 674, in _prepare_bias_variables\r\n raise ValueError(\r\nValueError: Setting a bias on sequences that share a common token termination is not yet supported. Please open an issue if you see this error message (after checking that it doesn't already exist).\r\n```", "@stas00 that is odd -- the exception was entirely removed 👀 \r\n\r\nOn my end, I can't reach the exception as before. Can I kindly request to try again? 🤗 (and, if it still fails, may I have some way to reproduce it?)", "ah, my bad. somehow I got the wrong branch I think. I have retested with the latest of this PR and the problem is no more. Thank you for the fix, Joao\r\n\r\noh, I know. I was on the branch of the original PR #24334 - hence the confusion" ]
1,689
1,689
1,689
MEMBER
null
# What does this PR do?

Fixes the issue raised by @stas00 [here](https://github.com/huggingface/transformers/pull/24334#issuecomment-1631324670).

In a nutshell, when I designed `SequenceBiasLogitsProcessor`, I committed the fallacy of early optimization -- the resulting solution was not compatible with biasing sequences that had the same termination. This PR opts for a simpler solution that is more inclusive (but probably slower).

Existing tests and docstring example passing 👍
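A hedged usage sketch of the processor in question: the `sequence_bias` argument to `generate` is assumed to be available in your installed version (it ships with the v4.31-era generation config), and the token ids below are illustrative placeholders rather than real GPT-2 vocabulary lookups.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Two biased sequences ending in the same token -- exactly the case the
# previous implementation raised a ValueError on.
sequence_bias = {
    (464, 3290): -10.0,  # placeholder ids for a two-token phrase
    (616, 3290): -10.0,  # shares the terminal token with the entry above
}

inputs = tokenizer("I walked past a", return_tensors="pt")
outputs = model.generate(
    **inputs, sequence_bias=sequence_bias, num_beams=4, max_new_tokens=10
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```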
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24822/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24822/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/24822", "html_url": "https://github.com/huggingface/transformers/pull/24822", "diff_url": "https://github.com/huggingface/transformers/pull/24822.diff", "patch_url": "https://github.com/huggingface/transformers/pull/24822.patch", "merged_at": 1689852198000 }
https://api.github.com/repos/huggingface/transformers/issues/24821
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24821/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24821/comments
https://api.github.com/repos/huggingface/transformers/issues/24821/events
https://github.com/huggingface/transformers/issues/24821
1,804,796,489
I_kwDOCUB6oc5rkwJJ
24,821
OpenAIGPTModel raises exception with `afn="Relu"`
{ "login": "nathan-chappell", "id": 36384302, "node_id": "MDQ6VXNlcjM2Mzg0MzAy", "avatar_url": "https://avatars.githubusercontent.com/u/36384302?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nathan-chappell", "html_url": "https://github.com/nathan-chappell", "followers_url": "https://api.github.com/users/nathan-chappell/followers", "following_url": "https://api.github.com/users/nathan-chappell/following{/other_user}", "gists_url": "https://api.github.com/users/nathan-chappell/gists{/gist_id}", "starred_url": "https://api.github.com/users/nathan-chappell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nathan-chappell/subscriptions", "organizations_url": "https://api.github.com/users/nathan-chappell/orgs", "repos_url": "https://api.github.com/users/nathan-chappell/repos", "events_url": "https://api.github.com/users/nathan-chappell/events{/privacy}", "received_events_url": "https://api.github.com/users/nathan-chappell/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey thanks for reporting and providing a very efficient snippet! Indeed the relu is not initialized thus you have the error. Reviewing your PR now", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,689
1,692
1,692
NONE
null
### System Info

- `transformers` version: 4.30.2
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no

### Who can help?

@ArthurZucker @younesbelkada

### Information

- [ ] The official example scripts
- [X] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

```python
import torch

from transformers.models.openai import OpenAIGPTConfig, OpenAIGPTModel

OpenAIGPTModel(OpenAIGPTConfig(n_vocab=1, n_embed=1, afn="relu"))(torch.eye(1, dtype=int))
# raises:
# AttributeError: 'ReLU' object has no attribute 'size'
```

### Expected behavior

I would expect no error to be raised and the calculation to be performed. For example, the following code behaves as expected:

```python
OpenAIGPTModel(OpenAIGPTConfig(n_vocab=1, n_embed=1, afn="gelu"))(torch.eye(1, dtype=int))
```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24821/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24821/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24820
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24820/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24820/comments
https://api.github.com/repos/huggingface/transformers/issues/24820/events
https://github.com/huggingface/transformers/issues/24820
1,804,488,797
I_kwDOCUB6oc5rjlBd
24,820
Extra else in GPTNeo prepare_inputs_for_generation
{ "login": "namespace-Pt", "id": 61188463, "node_id": "MDQ6VXNlcjYxMTg4NDYz", "avatar_url": "https://avatars.githubusercontent.com/u/61188463?v=4", "gravatar_id": "", "url": "https://api.github.com/users/namespace-Pt", "html_url": "https://github.com/namespace-Pt", "followers_url": "https://api.github.com/users/namespace-Pt/followers", "following_url": "https://api.github.com/users/namespace-Pt/following{/other_user}", "gists_url": "https://api.github.com/users/namespace-Pt/gists{/gist_id}", "starred_url": "https://api.github.com/users/namespace-Pt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/namespace-Pt/subscriptions", "organizations_url": "https://api.github.com/users/namespace-Pt/orgs", "repos_url": "https://api.github.com/users/namespace-Pt/repos", "events_url": "https://api.github.com/users/namespace-Pt/events{/privacy}", "received_events_url": "https://api.github.com/users/namespace-Pt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "My bad. I think one should override the `prepare_inputs_for_generation` method if they want to customize position_ids. The current code is okay for general use cases. But according to the recent LLaMA code https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/models/llama/modeling_llama.py#L740, the `else` should be deleted anyway.", "Hi @namespace-Pt, thanks for raising this issue. \r\n\r\nYes, the great thing about open source code is anyone can build upon and adapt it to their needs! \r\n\r\nWrt comparing to the code with Llama, these are different models, and so there might be different assumptions about the inputs for generation; sometimes, even if the assumptions are the same, one model was implemented at a different time and the logic differs for backwards compatibility reasons; and sometimes it's just an oversight on our part :) \r\n\r\ncc @gante for reference :) ", "@namespace-Pt 👋 \r\n\r\nI think the `else` is indeed not needed. Would you like to open a PR to fix it? 🤗 ", "Created pull request in https://github.com/huggingface/transformers/pull/24826" ]
1,689
1,689
1,689
CONTRIBUTOR
null
https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L704

There shouldn't be an extra `else` here. Sometimes users want to pass in customized position_ids and attention_masks, but this `else` rules out that practice.
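For comparison, here is a sketch of the `position_ids` handling without the extra `else`, paraphrasing the LLaMA logic linked in the discussion above rather than quoting either model file.

```python
import torch


def build_position_ids(attention_mask, past_key_values=None, position_ids=None):
    # Keep a user-supplied position_ids; only derive one from the attention
    # mask when it is absent, and slice to the last position when a cache
    # of past key/values is present.
    if attention_mask is not None and position_ids is None:
        position_ids = attention_mask.long().cumsum(-1) - 1
        position_ids.masked_fill_(attention_mask == 0, 1)
        if past_key_values:
            position_ids = position_ids[:, -1].unsqueeze(-1)
    return position_ids


mask = torch.tensor([[0, 1, 1, 1]])
print(build_position_ids(mask))  # tensor([[1, 0, 1, 2]])
```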
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24820/timeline
completed
null
null