url (stringlengths 62-66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64 377M-2.15B) | node_id (stringlengths 18-32) | number (int64 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (list) | created_at (int64 1.54k-1.71k) | updated_at (int64 1.54k-1.71k) | closed_at (int64 1.54k-1.71k) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0-234k) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses 3 values) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/24618
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24618/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24618/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24618/events
|
https://github.com/huggingface/transformers/pull/24618
| 1,784,177,542 |
PR_kwDOCUB6oc5Ua4O-
| 24,618 |
Check precompiled_charsmap before adding it to the normalizers' list for XLNetTokenizerFast conversion.
|
{
"login": "shahad-mahmud",
"id": 29411624,
"node_id": "MDQ6VXNlcjI5NDExNjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/29411624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shahad-mahmud",
"html_url": "https://github.com/shahad-mahmud",
"followers_url": "https://api.github.com/users/shahad-mahmud/followers",
"following_url": "https://api.github.com/users/shahad-mahmud/following{/other_user}",
"gists_url": "https://api.github.com/users/shahad-mahmud/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shahad-mahmud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shahad-mahmud/subscriptions",
"organizations_url": "https://api.github.com/users/shahad-mahmud/orgs",
"repos_url": "https://api.github.com/users/shahad-mahmud/repos",
"events_url": "https://api.github.com/users/shahad-mahmud/events{/privacy}",
"received_events_url": "https://api.github.com/users/shahad-mahmud/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @ArthurZucker I have incorporated the changes so that it supports all the models. \r\n\r\nIt is failing in one test case and it seems the test case is not related to this PR. From details I see that it fails due to the `module 'PIL.Image' has no attribute 'LINEAR'` error. Can it be related to the module or environment where the test is running? Do I need to work on this test case for this PR?",
"For the `test_exotic_models tests`, a fix, pinning the Pillow version has now been merged into main. Could you rebase to include these and trigger a re-run of the CI?",
"Hey @ArthurZucker and @amyeroberts all tests passed after the rebase. Can you have a look?",
"Perfect! Thanks for addressing this and contributing! ๐ค "
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR makes a small change to check the `precompiled_charsmap` during the conversion of a slow tokenizer to `XLNetTokenizerFast`: it verifies that the `precompiled_charsmap` is not empty before initializing `normalizers.Precompiled` from the tokenizers library. If a [Sentencepiece](https://github.com/google/sentencepiece) tokenizer model is trained with the `identity` normalization rule, i.e. no normalization is applied, initializing an XLNetTokenizerFast fails, as discussed in issue #24616. This PR solves that issue.
Fixes #24616
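For reference, a minimal sketch of the idea behind the check, written as a self-contained helper rather than the exact diff (the `build_normalizer` name and the surrounding `Replace` normalizers are assumptions based on how the converter usually builds its normalizer list):
```python
from tokenizers import normalizers

def build_normalizer(precompiled_charsmap: bytes) -> normalizers.Sequence:
    """Sketch of the guarded normalizer construction (illustrative, not the actual patch)."""
    norm_list = [normalizers.Replace("``", '"'), normalizers.Replace("''", '"')]
    if precompiled_charsmap:
        # An empty charsmap (e.g. SentencePiece trained with the `identity` rule) would
        # make normalizers.Precompiled fail with "Cannot parse precompiled_charsmap",
        # so Precompiled is only added when the charsmap is non-empty.
        norm_list.append(normalizers.Precompiled(precompiled_charsmap))
    return normalizers.Sequence(norm_list)

# build_normalizer(b"") now succeeds; build_normalizer(charsmap_bytes) behaves as before.
```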
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24618/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24618",
"html_url": "https://github.com/huggingface/transformers/pull/24618",
"diff_url": "https://github.com/huggingface/transformers/pull/24618.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24618.patch",
"merged_at": 1688431903000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24617
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24617/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24617/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24617/events
|
https://github.com/huggingface/transformers/issues/24617
| 1,784,168,634 |
I_kwDOCUB6oc5qWEC6
| 24,617 |
Some Bart models from facebook seem to have been removed
|
{
"login": "com3dian",
"id": 57277626,
"node_id": "MDQ6VXNlcjU3Mjc3NjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/57277626?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/com3dian",
"html_url": "https://github.com/com3dian",
"followers_url": "https://api.github.com/users/com3dian/followers",
"following_url": "https://api.github.com/users/com3dian/following{/other_user}",
"gists_url": "https://api.github.com/users/com3dian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/com3dian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/com3dian/subscriptions",
"organizations_url": "https://api.github.com/users/com3dian/orgs",
"repos_url": "https://api.github.com/users/com3dian/repos",
"events_url": "https://api.github.com/users/com3dian/events{/privacy}",
"received_events_url": "https://api.github.com/users/com3dian/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"They're back up now! ๐ค",
"Closing as this has been resolved! "
] | 1,688 | 1,688 | 1,688 |
NONE
| null |
### System Info
No response.
### Who can help?
@ArthurZucker @YouJiacheng @sgugger @stevhliu @MKhalusova
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I found that some Bart models ([facebook/Bart-large](https://huggingface.co/facebook/bart-large), [facebook/Bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn), [facebook/Bart-base](https://huggingface.co/facebook/bart-base), ...) have been removed from the Hub recently. These models were previously referenced in the Bart documentation. Consequently, I recommend updating the sample scripts to use alternative models.
For example, if I run the first [example](https://github.com/huggingface/transformers/blob/66ded238cd04e29ba98485984dd647e7d37d1603/docs/source/en/model_doc/bart.md?plain=1#L88-L101) in the Bart [doc page](https://huggingface.co/docs/transformers/model_doc/bart),
https://github.com/huggingface/transformers/blob/66ded238cd04e29ba98485984dd647e7d37d1603/docs/source/en/model_doc/bart.md?plain=1#L88-L101
it gives me the following error
```shell
Traceback (most recent call last):
  /opt/conda/lib/python3.7/site-packages/huggingface_hub/utils/_errors.py:259 in hf_raise_for_status
      response.raise_for_status()
  /opt/conda/lib/python3.7/site-packages/requests/models.py:1021 in raise_for_status
      raise HTTPError(http_error_msg, response=self)
HTTPError: 401 Client Error: Unauthorized for url:
https://huggingface.co/facebook/bart-large/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  /opt/conda/lib/python3.7/site-packages/transformers/utils/hub.py:420 in cached_file
      local_files_only=local_files_only,
  /opt/conda/lib/python3.7/site-packages/huggingface_hub/utils/_validators.py:120 in _inner_fn
      return fn(*args, **kwargs)
  /opt/conda/lib/python3.7/site-packages/huggingface_hub/file_download.py:1170 in hf_hub_download
      timeout=etag_timeout,
  /opt/conda/lib/python3.7/site-packages/huggingface_hub/utils/_validators.py:120 in _inner_fn
      return fn(*args, **kwargs)
  /opt/conda/lib/python3.7/site-packages/huggingface_hub/file_download.py:1507 in get_hf_file_metadata
      hf_raise_for_status(r)
  /opt/conda/lib/python3.7/site-packages/huggingface_hub/utils/_errors.py:291 in hf_raise_for_status
      raise RepositoryNotFoundError(message, response) from e
RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-64a075a2-6fc983a27bdf3f765f2f8757)
Repository Not Found for url: https://huggingface.co/facebook/bart-large/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  in <module>:3
      model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_t
  /opt/conda/lib/python3.7/site-packages/transformers/modeling_utils.py:2282 in from_pretrained
      **kwargs,
  /opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py:547 in from_pretrained
      config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwarg
  /opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py:574 in get_config_dict
      config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwar
  /opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py:641 in _get_config_dict
      _commit_hash=commit_hash,
  /opt/conda/lib/python3.7/site-packages/transformers/utils/hub.py:425 in cached_file
      f"{path_or_repo_id} is not a local folder and is not a valid model identifie
OSError: facebook/bart-large is not a local folder and is not a valid model identifier listed on
'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or
log in with `huggingface-cli login` and pass `use_auth_token=True`.
```
### Expected behavior
No such error output should be given; the models should load as before.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24617/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 3,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24617/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24616
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24616/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24616/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24616/events
|
https://github.com/huggingface/transformers/issues/24616
| 1,784,161,160 |
I_kwDOCUB6oc5qWCOI
| 24,616 |
XLNetTokenizerFast conversion fails with identity normalization in Sentencepiece tokenizer
|
{
"login": "shahad-mahmud",
"id": 29411624,
"node_id": "MDQ6VXNlcjI5NDExNjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/29411624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shahad-mahmud",
"html_url": "https://github.com/shahad-mahmud",
"followers_url": "https://api.github.com/users/shahad-mahmud/followers",
"following_url": "https://api.github.com/users/shahad-mahmud/following{/other_user}",
"gists_url": "https://api.github.com/users/shahad-mahmud/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shahad-mahmud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shahad-mahmud/subscriptions",
"organizations_url": "https://api.github.com/users/shahad-mahmud/orgs",
"repos_url": "https://api.github.com/users/shahad-mahmud/repos",
"events_url": "https://api.github.com/users/shahad-mahmud/events{/privacy}",
"received_events_url": "https://api.github.com/users/shahad-mahmud/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"To my mind, the bug can be fixed with a checking of the precompiled charmap like the following code snippet:\r\n\r\n```python\r\nprecompiled_charsmap = proto.normalizer_spec.precompiled_charsmap\r\n \r\nif precompiled_charsmap:\r\n list_normalizers.append(normalizers.Precompiled(precompiled_charsmap))\r\n```\r\n\r\nI am creating a pull request with this checking."
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.15.1
- PyTorch version (GPU?): 1.12.1+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.6.3 (cpu)
- Jax version: 0.4.1
- JaxLib version: 0.4.1
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZ
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I was trying to initialize an XLNetTokenizerFast tokenizer using a Sentencepiece tokenizer model. While training the Sentencepiece tokenizer, I used the `identity` normalization rule name as I did not want to normalize the texts. While initializing XLNetTokenizerFast using this Sentencepiece tokenizer, it fails and raises the following error:
```bash
Traceback (most recent call last):
File "xlnet_tok_test.py", line 10, in <module>
tokenizer = transformers.XLNetTokenizerFast(
File "/home/shahad/miniconda3/envs/gen/lib/python3.8/site-packages/transformers/models/xlnet/tokenization_xlnet_fast.py", line 150, in __init__
super().__init__(
File "/home/shahad/miniconda3/envs/gen/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 118, in __init__
fast_tokenizer = convert_slow_tokenizer(slow_tokenizer)
File "/home/shahad/miniconda3/envs/gen/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py", line 1162, in convert_slow_tokenizer
return converter_class(transformer_tokenizer).converted()
File "/home/shahad/miniconda3/envs/gen/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py", line 503, in converted
tokenizer.normalizer = self.normalizer(self.proto)
File "/home/shahad/miniconda3/envs/gen/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py", line 786, in normalizer
list_normalizers.append(normalizers.Precompiled(precompiled_charsmap))
Exception: Error while attempting to build Precompiled normalizer: Cannot parse precompiled_charsmap
```
However, I can successfully initialize XLNetTokenizerFast when the Sentencepiece tokenizer is trained with `nfkc` or the default `nmt_nfkc` normalization rule.
This bug can be reproduced using the following colab notebook:
https://colab.research.google.com/drive/1kj17NAP3xn22MEwp_96eNBLYg5d5np9u?usp=sharing
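A self-contained sketch of the reproduction (hypothetical paths and corpus; the training settings are placeholders and only mirror what the linked notebook does):
```python
# Hypothetical reproduction sketch: "corpus.txt" and the training settings are placeholders.
import sentencepiece as spm
import transformers

spm.SentencePieceTrainer.train(
    input="corpus.txt",
    model_prefix="identity_sp",
    vocab_size=1000,
    normalization_rule_name="identity",  # no normalization -> empty precompiled_charsmap
)

# Converting the resulting slow tokenizer fails with
# "Cannot parse precompiled_charsmap" before the fix in #24618.
tokenizer = transformers.XLNetTokenizerFast(vocab_file="identity_sp.model")
```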
### Expected behavior
The XLNetTokenizerFast should be initialized without any error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24616/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24615
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24615/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24615/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24615/events
|
https://github.com/huggingface/transformers/issues/24615
| 1,784,154,098 |
I_kwDOCUB6oc5qWAfy
| 24,615 |
Cannot load BART model
|
{
"login": "Onkar-2803",
"id": 60955023,
"node_id": "MDQ6VXNlcjYwOTU1MDIz",
"avatar_url": "https://avatars.githubusercontent.com/u/60955023?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Onkar-2803",
"html_url": "https://github.com/Onkar-2803",
"followers_url": "https://api.github.com/users/Onkar-2803/followers",
"following_url": "https://api.github.com/users/Onkar-2803/following{/other_user}",
"gists_url": "https://api.github.com/users/Onkar-2803/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Onkar-2803/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Onkar-2803/subscriptions",
"organizations_url": "https://api.github.com/users/Onkar-2803/orgs",
"repos_url": "https://api.github.com/users/Onkar-2803/repos",
"events_url": "https://api.github.com/users/Onkar-2803/events{/privacy}",
"received_events_url": "https://api.github.com/users/Onkar-2803/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"It seems like Facebook just disappeared from HuggingFace. Still waiting for something back...",
"Yeah, no models & datasets visible.",
"https://huggingface.co/facebook/bart-large is back up now ๐ค ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,690 | 1,690 |
NONE
| null |
Trying to load the BART model as specified on the [website](https://huggingface.co/docs/transformers/model_doc/bart#mask-filling:~:text=Mask%20Filling-,The%20facebook,-/bart%2Dbase%20and) with the following code:
`model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0)`
Error: facebook/bart-large is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
@patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24615/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 4
}
|
https://api.github.com/repos/huggingface/transformers/issues/24615/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24614
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24614/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24614/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24614/events
|
https://github.com/huggingface/transformers/pull/24614
| 1,784,116,851 |
PR_kwDOCUB6oc5UarEi
| 24,614 |
[DOC] Clarify relationship between load_best_model_at_end and save_total_limit
|
{
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Can you just do a quick rebase/merge on the main branch? I think the issue in the tests (due to a release of Pillow apparently) is fixed on main."
] | 1,688 | 1,689 | 1,689 |
COLLABORATOR
| null |
Clarify the relationship between `load_best_model_at_end` and `save_total_limit`. Hope this is clear.
As discussed on Slack @sgugger
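A hedged illustration of the interplay being documented, assuming typical `Trainer` usage (the argument values here are arbitrary):
```python
from transformers import TrainingArguments

# Sketch only: with both options enabled, checkpoint rotation keeps the most recent
# checkpoints up to `save_total_limit`, and (as this PR clarifies) the best checkpoint
# is retained as well, so slightly more than `save_total_limit` checkpoints can exist.
args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    save_total_limit=2,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
)
```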
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24614/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24614/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24614",
"html_url": "https://github.com/huggingface/transformers/pull/24614",
"diff_url": "https://github.com/huggingface/transformers/pull/24614.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24614.patch",
"merged_at": 1689248177000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24613
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24613/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24613/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24613/events
|
https://github.com/huggingface/transformers/issues/24613
| 1,783,910,645 |
I_kwDOCUB6oc5qVFD1
| 24,613 |
Fine-tune T5 on SQuAD
|
{
"login": "h1326889",
"id": 138305300,
"node_id": "U_kgDOCD5fFA",
"avatar_url": "https://avatars.githubusercontent.com/u/138305300?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h1326889",
"html_url": "https://github.com/h1326889",
"followers_url": "https://api.github.com/users/h1326889/followers",
"following_url": "https://api.github.com/users/h1326889/following{/other_user}",
"gists_url": "https://api.github.com/users/h1326889/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h1326889/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h1326889/subscriptions",
"organizations_url": "https://api.github.com/users/h1326889/orgs",
"repos_url": "https://api.github.com/users/h1326889/repos",
"events_url": "https://api.github.com/users/h1326889/events{/privacy}",
"received_events_url": "https://api.github.com/users/h1326889/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I didn't verify manually, but I think you have to modify \r\n\r\nhttps://github.com/huggingface/transformers/blob/fc7ce2ebc52eccd8158a7feeeee11eb44f964937/examples/pytorch/question-answering/run_seq2seq_qa.py#L695-L706\r\n\r\nin order to save the prediction (generation).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,691 | 1,691 |
NONE
| null |
### System Info
I was trying to use the official command to evaluate T5 on SQuAD data, but where can I find the prediction file that contains the actual answer T5 generated?
python run_seq2seq_qa.py \
--model_name_or_path t5-small \
--dataset_name squad \
--context_column context \
--question_column question \
--answer_column answers \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--predict_with_generate \
--output_dir /tmp/debug_seq2seq_squad/
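As the comment above suggests, the script does not write the generated answers out of the box. A quick, self-contained way to look at what T5 actually generates is to run generation directly; a minimal sketch (the question/context pair is made up for illustration and is independent of the script):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# SQuAD-style input using the "question: ... context: ..." prompt format.
question = "What is the capital of France?"
context = "Paris is the capital and most populous city of France."
inputs = tokenizer(f"question: {question} context: {context}", return_tensors="pt")

output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```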
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
NA
### Expected behavior
find the prediction file
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24613/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24613/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24612
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24612/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24612/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24612/events
|
https://github.com/huggingface/transformers/issues/24612
| 1,783,821,193 |
I_kwDOCUB6oc5qUvOJ
| 24,612 |
ValueError: An instance of tokenizer class BioGptTokenizer cannot be converted in a Fast tokenizer instance. No converter was found.
|
{
"login": "TekeshwarHirwani",
"id": 79913769,
"node_id": "MDQ6VXNlcjc5OTEzNzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/79913769?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TekeshwarHirwani",
"html_url": "https://github.com/TekeshwarHirwani",
"followers_url": "https://api.github.com/users/TekeshwarHirwani/followers",
"following_url": "https://api.github.com/users/TekeshwarHirwani/following{/other_user}",
"gists_url": "https://api.github.com/users/TekeshwarHirwani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TekeshwarHirwani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TekeshwarHirwani/subscriptions",
"organizations_url": "https://api.github.com/users/TekeshwarHirwani/orgs",
"repos_url": "https://api.github.com/users/TekeshwarHirwani/repos",
"events_url": "https://api.github.com/users/TekeshwarHirwani/events{/privacy}",
"received_events_url": "https://api.github.com/users/TekeshwarHirwani/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] |
closed
| false | null |
[] |
[
"Hi @TekeshwarHirwani, thanks for raising this issue. \r\n\r\nIndeed, there doesn't exist a convert yet for this tokenizer. Would you like to add it? You can find examples of [converters here](https://github.com/huggingface/transformers/blob/main/src/transformers/convert_slow_tokenizer.py), and it will need to be added to the [SLOW_TO_FAST_CONVERTERS mapping](https://github.com/huggingface/transformers/blob/f4e4b4d0e2dc248433e808594f7595292037d891/src/transformers/convert_slow_tokenizer.py#L1230).\r\n\r\ncc @Rocketknight1 @ArthurZucker ",
"Also other models based on `moses` don't have a fast tokenizer version (`xlm`, `fsmt`, `flaubert` etc). It's probably because moses is already fast enough and `tokenizers` library is not really made for ruled base tokenization. Correct me if I am wrong @Narsil ",
"Seems very correct. The reasons to skip `moses` are vague to me but indeed I'm not sure we should go down that path :).\r\n",
"@TekeshwarHirwani A similar context was raised here: https://github.com/huggingface/transformers/pull/17254#issuecomment-1150669010 \r\n\r\nYou may first create a slow-to-fast-converter that is similar to PhobertConverter/BertweetConverter from https://github.com/datquocnguyen/transformers/blob/main/src/transformers/convert_slow_tokenizer.py \r\nThen you could create a BiogptTokenizerFast in the same manner to as PhobertTokenizerFast/BertweetTokenizerFast from https://github.com/datquocnguyen/transformers/blob/main/src/transformers/models/bertweet/tokenization_bertweet_fast.py \r\n\r\nSee more details [here](https://github.com/huggingface/transformers/pull/17254/files).",
"I am not really sure why `moses` was mentioned here @ArthurZucker @Narsil @amyeroberts \r\nThe reason why you'd have to **hack** the `tokenizers` to have a fast variant of such slow tokenizers for FlauBERT or BioGPT is that [many subwords appearing in the \"merges\" file do not appear in the \"vocab\" file as in CTRL, FlauBERT, BioGPT, PhoBERT and BERTweet and the like (i.e. slow tokenizers would convert those subwords into unkn_id during encoding), thus it is impossible to develop a fast tokenizer variant using documented approaches while keeping the same tokenization strategy](https://github.com/huggingface/transformers/pull/17254#issuecomment-1150669010).",
"Thanks for the link.\r\nI'm confused in the thread you linked you say that fast tokenizers are possible: https://github.com/huggingface/transformers/pull/17254#issuecomment-1130248921 and I see one here: https://huggingface.co/vinai/bertweet-covid19-base-uncased/blob/main/tokenizer.json\r\n\r\nThis lead me to check BiotGPT **does** use moses: https://github.com/huggingface/transformers/blob/main/src/transformers/models/biogpt/tokenization_biogpt.py#L156-L162\r\nFlaubert **uses it too**: https://github.com/huggingface/transformers/blob/main/src/transformers/models/flaubert/tokenization_flaubert.py#L312-L318\r\nWhile CTRL **doesn't**: https://github.com/huggingface/transformers/blob/main/src/transformers/models/ctrl/tokenization_ctrl.py\r\nAnd phobert indeed **doesn't** : https://github.com/huggingface/transformers/blob/main/src/transformers/models/phobert/tokenization_phobert.py\r\n\r\nSo phobert might be doable, but I'm not sure it's related to BioGPT\r\n",
"@Narsil Please be aware of the difference between \"subword\" tokenization vs. \"word\" tokenization. \r\n\r\nAll the `tokenization_{model_name}.py` files you mentioned use \"bpe\" for `subword` tokenization, e.g. [https://github.com/huggingface/transformers/blob/2ab75add4b30c2fc44a8bf575156d448d9ed87a7/src/transformers/models/biogpt/tokenization_biogpt.py#L170](https://github.com/huggingface/transformers/blob/2ab75add4b30c2fc44a8bf575156d448d9ed87a7/src/transformers/models/biogpt/tokenization_biogpt.py#L170)\r\n\r\nBiotGPT, Flaubert, CTRL, PhoBERT and BERTweet all have \"merges\" and \"vocab\" files for BPE-based subword tokenization (e.g. see https://huggingface.co/microsoft/biogpt/tree/main). \r\n\r\nFor BiotGPT and Flaubert, `mosestokenizer` is just a `word` tokenizer/normalizer, which can be used as an external preprocess w.r.t. a fast `subword` tokenization variant (likewise, to perform Vietnamese word segmentation before using PhobertTokenizerFast, or to perform Tweet normalization before using BertweetTokenizerFast). \r\n\r\nPS: https://huggingface.co/vinai/bertweet-covid19-base-uncased/blob/main/tokenizer.json is just a saved output of the [convert_slow_tokenizer.py](https://github.com/datquocnguyen/transformers/blob/main/src/transformers/convert_slow_tokenizer.py) that takes \"merges\" and \"vocab\" files as input. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,692 | 1,692 |
NONE
| null |
### System Info
ValueError: An instance of tokenizer class BioGptTokenizer cannot be converted in a Fast tokenizer instance. No converter was found.
I am using microsoft/biogpt for the token-classification NER task script (https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner.py), which only has a slow tokenizer. The journey so far has been:
1. Got an error in the above-mentioned script: "This example script only works for models that have a fast tokenizer."
2. Then tried the second option mentioned, which is to use the old script. There I got a runtime error, reported the issue, and was told that I need to use the new run_ner.py.
3. Then tried the option to convert the slow tokenizer to a fast one, but now I am getting this error:
"ValueError: An instance of tokenizer class BioGptTokenizer cannot be converted
in a Fast tokenizer instance. No converter was found. Currently available
slow->fast convertors: ['AlbertTokenizer', 'BartTokenizer', 'BarthezTokenizer',
'BertTokenizer', 'BigBirdTokenizer', 'BlenderbotTokenizer',
'CamembertTokenizer', 'CLIPTokenizer', 'CodeGenTokenizer', 'ConvBertTokenizer',
'DebertaTokenizer', 'DebertaV2Tokenizer', 'DistilBertTokenizer',
'DPRReaderTokenizer', 'DPRQuestionEncoderTokenizer',
'DPRContextEncoderTokenizer', 'ElectraTokenizer', 'FNetTokenizer',
'FunnelTokenizer', 'GPT2Tokenizer', 'HerbertTokenizer', 'LayoutLMTokenizer',
'LayoutLMv2Tokenizer', 'LayoutLMv3Tokenizer', 'LayoutXLMTokenizer',
'LongformerTokenizer', 'LEDTokenizer', 'LxmertTokenizer', 'MarkupLMTokenizer',
'MBartTokenizer', 'MBart50Tokenizer', 'MPNetTokenizer', 'MobileBertTokenizer',
'MvpTokenizer', 'NllbTokenizer', 'OpenAIGPTTokenizer', 'PegasusTokenizer',
'RealmTokenizer', 'ReformerTokenizer', 'RemBertTokenizer', 'RetriBertTokenizer',
'RobertaTokenizer', 'RoFormerTokenizer', 'SqueezeBertTokenizer', 'T5Tokenizer',
'WhisperTokenizer', 'XLMRobertaTokenizer', 'XLNetTokenizer',
'SplinterTokenizer', 'XGLMTokenizer', 'LlamaTokenizer']"
I request that the team add BioGptTokenizer to this list.
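For context, the error above is raised by the slow-to-fast conversion step; a short illustration of where it comes from (model id as in this report; loading the slow BioGPT tokenizer requires `sacremoses`):
```python
from transformers import BioGptTokenizer
from transformers.convert_slow_tokenizer import convert_slow_tokenizer

slow_tok = BioGptTokenizer.from_pretrained("microsoft/biogpt")  # the slow tokenizer loads fine

# This call fails while no converter is registered for BioGptTokenizer:
# ValueError: An instance of tokenizer class BioGptTokenizer cannot be converted
# in a Fast tokenizer instance. No converter was found. ...
fast_backend = convert_slow_tokenizer(slow_tok)
```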
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
%cd /content/transformers/examples/pytorch/token-classification
!python run_ner.py \
--tokenizer_name microsoft/biogpt \
--model_name_or_path microsoft/biogpt \
--train_file /content/TRAIN.json \
--validation_file /content/DEV.json \
--test_file /content/DEV.json \
--output_dir $checkpoint_dir \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--do_train \
--do_eval \
--do_predict \
--num_train_epochs 4 \
--evaluation_strategy epoch \
--task_name ner \
--overwrite_output_dir True \
--save_strategy epoch \
--ignore_mismatched_sizes=True
### Expected behavior
Successfully train after the conversion
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24612/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24612/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24611
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24611/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24611/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24611/events
|
https://github.com/huggingface/transformers/pull/24611
| 1,783,721,677 |
PR_kwDOCUB6oc5UZTKb
| 24,611 |
translate the English documentation into Chinese
|
{
"login": "liteli1987gmail",
"id": 59245973,
"node_id": "MDQ6VXNlcjU5MjQ1OTcz",
"avatar_url": "https://avatars.githubusercontent.com/u/59245973?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liteli1987gmail",
"html_url": "https://github.com/liteli1987gmail",
"followers_url": "https://api.github.com/users/liteli1987gmail/followers",
"following_url": "https://api.github.com/users/liteli1987gmail/following{/other_user}",
"gists_url": "https://api.github.com/users/liteli1987gmail/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liteli1987gmail/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liteli1987gmail/subscriptions",
"organizations_url": "https://api.github.com/users/liteli1987gmail/orgs",
"repos_url": "https://api.github.com/users/liteli1987gmail/repos",
"events_url": "https://api.github.com/users/liteli1987gmail/events{/privacy}",
"received_events_url": "https://api.github.com/users/liteli1987gmail/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @liteli1987gmail, thanks for opening this PR and starting the Chinese translation effort! \r\n\r\nIs there a corresponding github issue for this translation? We recommend opening an issue (following [this template](https://github.com/huggingface/transformers/blob/f4e4b4d0e2dc248433e808594f7595292037d891/.github/ISSUE_TEMPLATE/i18n.md#L4)) so that others can easily track progress and contribute. \r\n\r\nAs we see here, translating all of the pages at once creates a very large diff that isn't realistic for people to review. Could you instead have each of the pages listed in a checklist on the github issue, and then open a separate PR for each of those pages? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
# Aim
This PR aims to translate the English documentation into Chinese, making it easier for Chinese developers to read and access the documentation.
zh_translate:
@chenglu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24611/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24611/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24611",
"html_url": "https://github.com/huggingface/transformers/pull/24611",
"diff_url": "https://github.com/huggingface/transformers/pull/24611.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24611.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24610
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24610/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24610/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24610/events
|
https://github.com/huggingface/transformers/issues/24610
| 1,783,481,004 |
I_kwDOCUB6oc5qTcKs
| 24,610 |
PreTrainedTokenizerFast - whitespace merge skipped
|
{
"login": "cmp-nct",
"id": 78893154,
"node_id": "MDQ6VXNlcjc4ODkzMTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/78893154?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cmp-nct",
"html_url": "https://github.com/cmp-nct",
"followers_url": "https://api.github.com/users/cmp-nct/followers",
"following_url": "https://api.github.com/users/cmp-nct/following{/other_user}",
"gists_url": "https://api.github.com/users/cmp-nct/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cmp-nct/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmp-nct/subscriptions",
"organizations_url": "https://api.github.com/users/cmp-nct/orgs",
"repos_url": "https://api.github.com/users/cmp-nct/repos",
"events_url": "https://api.github.com/users/cmp-nct/events{/privacy}",
"received_events_url": "https://api.github.com/users/cmp-nct/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The behavior switches to what I expected if you disable pre_tokenizer->use_regex which ignores the rank and contains quite a bit of english grammar rules.\r\nNot sure if that regex snake should really be used by default, given the international use of tokenizers. (ironically TII has chosen it despite being in an arabic speaking country)"
] | 1,688 | 1,688 | 1,688 |
NONE
| null |
### System Info
@ArthurZucker
Most likely I'm wrong; I've been digging through tokenization for 10 hours in a row and am quite new to the topic.
Vocab: https://huggingface.co/tiiuae/falcon-40b/blob/main/tokenizer.json
String: `"Hello World"`
Two spaces are in the merge list right on top at line 19
" W" is in line 128
Running this through the tokenizer (`tokenizer = PreTrainedTokenizerFast(tokenizer_file='tokenizer.json')`)
Falcon IDs:
` [9856, 204, 2889]`
Falcon tokens:
` ['Hello', 'Ġ', 'ĠWorld']`
What I expected:
```
9856 -> 'Hello'
258 -> '  '
12670 -> 'World'
```
From my understanding, the two whitespaces form a rank-19 merge (the 2nd lowest one, next to 'o r' at 12).
I most likely just misunderstand a special rule in BPE in relation to whitespace characters.
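What the observed output suggests (hedged, and assuming the same `tokenizer.json` file as above) is that the byte-level pre-tokenizer's regex splits the string before any BPE merges run, so the two spaces never end up in the same pre-token; a small sketch to inspect and toggle that behaviour:
```python
from tokenizers import Tokenizer, pre_tokenizers

tok = Tokenizer.from_file("tokenizer.json")  # the Falcon tokenizer file referenced above

# Show how the input is split before BPE; with the default GPT-2-style regex the split is
# roughly [('Hello', ...), (' ', ...), (' World', ...)], so the '  ' merge can never apply.
print(tok.pre_tokenizer.pre_tokenize_str("Hello  World"))

# Replacing the pre-tokenizer with a ByteLevel that skips the regex changes the split
# (note: this ignores any other pre-tokenization steps the original tokenizer.json may configure).
tok.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False, use_regex=False)
print(tok.encode("Hello  World").tokens)
```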
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import sys
from transformers import PreTrainedTokenizerFast
# Load the tokenizer
tokenizer = PreTrainedTokenizerFast(tokenizer_file='tokenizer.json')
# Define the string to tokenize
text = "Hello World"
# Check if a command line argument was provided
if len(sys.argv) > 1:
text = sys.argv[1]
# Tokenize the string
output = tokenizer.encode(text)
# Print the token IDs
print("Falcon IDs:\n\t", output)
tokens = tokenizer.convert_ids_to_tokens(output)
print("Falcon tokens:\n\t", tokens)
```
### Expected behavior
The two spaces form a higher-priority (lower-rank) merge than space + W, so I'd expect this outcome:
9856 -> 'Hello'
258 -> '  '
12670 -> 'World'
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24610/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24609
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24609/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24609/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24609/events
|
https://github.com/huggingface/transformers/pull/24609
| 1,783,384,109 |
PR_kwDOCUB6oc5UYGU5
| 24,609 |
Fix model referenced and results in documentation. Model mentioned was inaccessible
|
{
"login": "rafaelpadilla",
"id": 31217453,
"node_id": "MDQ6VXNlcjMxMjE3NDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafaelpadilla",
"html_url": "https://github.com/rafaelpadilla",
"followers_url": "https://api.github.com/users/rafaelpadilla/followers",
"following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}",
"gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions",
"organizations_url": "https://api.github.com/users/rafaelpadilla/orgs",
"repos_url": "https://api.github.com/users/rafaelpadilla/repos",
"events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafaelpadilla/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
This is a very small change on the documentation.
The mentioned model (`MariaK/detr-resnet-50_finetuned_cppe5`) was either removed or set to private. So I could not reproduce the shown example.
I basically reference the same model but from another user, which provides a slightly better result. That's why I also updated the metrics.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts @MKhalusova
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24609/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24609",
"html_url": "https://github.com/huggingface/transformers/pull/24609",
"diff_url": "https://github.com/huggingface/transformers/pull/24609.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24609.patch",
"merged_at": 1688574336000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24608
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24608/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24608/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24608/events
|
https://github.com/huggingface/transformers/issues/24608
| 1,783,342,783 |
I_kwDOCUB6oc5qS6a_
| 24,608 |
CUDA error: an illegal memory access was encountered
|
{
"login": "johnchienbronci",
"id": 27708347,
"node_id": "MDQ6VXNlcjI3NzA4MzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27708347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnchienbronci",
"html_url": "https://github.com/johnchienbronci",
"followers_url": "https://api.github.com/users/johnchienbronci/followers",
"following_url": "https://api.github.com/users/johnchienbronci/following{/other_user}",
"gists_url": "https://api.github.com/users/johnchienbronci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnchienbronci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnchienbronci/subscriptions",
"organizations_url": "https://api.github.com/users/johnchienbronci/orgs",
"repos_url": "https://api.github.com/users/johnchienbronci/repos",
"events_url": "https://api.github.com/users/johnchienbronci/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnchienbronci/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"What's your deepspeed version. Probably try to upgrade it and check again.\r\n\r\nYou can follow [this issue page](https://github.com/microsoft/DeepSpeed/issues/3373).\r\n",
"ds_report\r\n```\r\nDeepSpeed general environment info:\r\ntorch install path ............... ['/home/ubuntu/.local/lib/python3.10/site-packages/torch']\r\ntorch version .................... 2.0.1+cu117\r\ndeepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']\r\ndeepspeed info ................... 0.9.5, unknown, unknown\r\ntorch cuda version ............... 11.7\r\ntorch hip version ................ None\r\nnvcc version ..................... 11.5\r\ndeepspeed wheel compiled w. ...... torch 0.0, cuda 0.0\r\n```\r\n\r\nI have tried not using DeepSpeed, but the issue still occurs.",
"> I can fine-tune successfully using the Common Voice corpus\r\n\r\nIf the same way of launching the training works for one dataset but not for another, and we don't have access to the second dataset (your custom corpora), we won't be able to help unfortunately.",
"I think I can try debugging, but I don't have any ideas. Do you have any suggestions or directions?",
"This thread https://github.com/microsoft/DeepSpeed/issues/3373 is a better place. You can ask those people how they solved the issue.",
"ok, thanks",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,691 | 1,691 |
NONE
| null |
I encountered some errors when running run_speech_recognition_ctc_streaming.sh with `deepspeed` (`torchrun --nproc_per_node 1 ...`), and this issue consistently occurs with my custom corpora.
Does anyone have any ideas? (I can fine-tune successfully using the Common Voice corpus)
environment:
gpu number: 1
export CUDA_LAUNCH_BLOCKING=1
export TORCH_USE_CUDA_DSA=1
```
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: an illegal memory access was encountered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at ../c10/cuda/CUDAException.cpp:44 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f7b400ef097 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7f7b400aaa33 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x118 (0x7f7b4019d5a8 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x1f3de (0x7f7b401663de in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10_cuda.so)
frame #4: <unknown function> + 0x22650 (0x7f7b40169650 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10_cuda.so)
frame #5: <unknown function> + 0x22a35 (0x7f7b40169a35 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10_cuda.so)
frame #6: <unknown function> + 0x4ef710 (0x7f7af1667710 in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_python.so)
frame #7: c10::TensorImpl::~TensorImpl() + 0x1e3 (0x7f7b400cc393 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10.so)
frame #8: c10::TensorImpl::~TensorImpl() + 0x9 (0x7f7b400cc529 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10.so)
frame #9: <unknown function> + 0x7761b8 (0x7f7af18ee1b8 in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_python.so)
frame #10: THPVariable_subclass_dealloc(_object*) + 0x2c6 (0x7f7af18ee506 in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_python.so)
frame #11: <unknown function> + 0x1388e1 (0x5580685a58e1 in /usr/bin/python3)
frame #12: <unknown function> + 0x1386dc (0x5580685a56dc in /usr/bin/python3)
frame #13: <unknown function> + 0x138787 (0x5580685a5787 in /usr/bin/python3)
frame #14: <unknown function> + 0x174ac1 (0x5580685e1ac1 in /usr/bin/python3)
frame #15: <unknown function> + 0x153090 (0x5580685c0090 in /usr/bin/python3)
frame #16: <unknown function> + 0x166918 (0x5580685d3918 in /usr/bin/python3)
frame #17: <unknown function> + 0x2593a7 (0x5580686c63a7 in /usr/bin/python3)
frame #18: <unknown function> + 0x17a7b0 (0x5580685e77b0 in /usr/bin/python3)
frame #19: <unknown function> + 0x25f5c1 (0x5580686cc5c1 in /usr/bin/python3)
frame #20: _PyEval_EvalFrameDefault + 0x7a99 (0x5580685b9b49 in /usr/bin/python3)
frame #21: <unknown function> + 0x16ac31 (0x5580685d7c31 in /usr/bin/python3)
frame #22: PyObject_Call + 0x122 (0x5580685d88e2 in /usr/bin/python3)
frame #23: <unknown function> + 0x27c30c (0x5580686e930c in /usr/bin/python3)
frame #24: _PyObject_MakeTpCall + 0x25b (0x5580685c04ab in /usr/bin/python3)
frame #25: _PyEval_EvalFrameDefault + 0x1a2f (0x5580685b3adf in /usr/bin/python3)
frame #26: <unknown function> + 0x16ac31 (0x5580685d7c31 in /usr/bin/python3)
frame #27: _PyEval_EvalFrameDefault + 0x1a2f (0x5580685b3adf in /usr/bin/python3)
frame #28: _PyFunction_Vectorcall + 0x7c (0x5580685ca1ec in /usr/bin/python3)
frame #29: _PyEval_EvalFrameDefault + 0x6d5 (0x5580685b2785 in /usr/bin/python3)
frame #30: <unknown function> + 0x141ed6 (0x5580685aeed6 in /usr/bin/python3)
frame #31: PyEval_EvalCode + 0x86 (0x5580686a5366 in /usr/bin/python3)
frame #32: <unknown function> + 0x265108 (0x5580686d2108 in /usr/bin/python3)
frame #33: <unknown function> + 0x25df5b (0x5580686caf5b in /usr/bin/python3)
frame #34: <unknown function> + 0x264e55 (0x5580686d1e55 in /usr/bin/python3)
frame #35: _PyRun_SimpleFileObject + 0x1a8 (0x5580686d1338 in /usr/bin/python3)
frame #36: _PyRun_AnyFileObject + 0x43 (0x5580686d1033 in /usr/bin/python3)
frame #37: Py_RunMain + 0x2be (0x5580686c22de in /usr/bin/python3)
frame #38: Py_BytesMain + 0x2d (0x55806869832d in /usr/bin/python3)
frame #39: <unknown function> + 0x29d90 (0x7f7b5c24ad90 in /lib/x86_64-linux-gnu/libc.so.6)
frame #40: __libc_start_main + 0x80 (0x7f7b5c24ae40 in /lib/x86_64-linux-gnu/libc.so.6)
frame #41: _start + 0x25 (0x558068698225 in /usr/bin/python3)
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -6) local_rank: 0 (pid: 24134) of binary: /usr/bin/python3
```
Running `pip3 install numpy --pre torch torchvision torchaudio --force-reinstall --index-url https://download.pytorch.org/whl/nightly/cu117` does not solve my problem.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24608/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24607
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24607/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24607/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24607/events
|
https://github.com/huggingface/transformers/issues/24607
| 1,783,289,006 |
I_kwDOCUB6oc5qStSu
| 24,607 |
`logging_dir` is not being generated.
|
{
"login": "vanbasten23",
"id": 5279639,
"node_id": "MDQ6VXNlcjUyNzk2Mzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5279639?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vanbasten23",
"html_url": "https://github.com/vanbasten23",
"followers_url": "https://api.github.com/users/vanbasten23/followers",
"following_url": "https://api.github.com/users/vanbasten23/following{/other_user}",
"gists_url": "https://api.github.com/users/vanbasten23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vanbasten23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vanbasten23/subscriptions",
"organizations_url": "https://api.github.com/users/vanbasten23/orgs",
"repos_url": "https://api.github.com/users/vanbasten23/repos",
"events_url": "https://api.github.com/users/vanbasten23/events{/privacy}",
"received_events_url": "https://api.github.com/users/vanbasten23/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @vanbasten23, \r\n\r\nDo you have tensorboard installed in your environment?\r\n\r\nCould you share the the running environment: run `transformers-cli env` in the terminal and copy-paste the output?",
"> Do you have tensorboard installed in your environment?\r\n\r\nThanks. It turns out once I installed tensorboard, the folder was generated. Btw, do you know how transformer writes to the tensorboard? I searched things like \"from torch.utils.tensorboard import SummaryWriter\" in the transformer codebase but I couldn't find any reference.\r\n",
"Writing logic is defined in the [TensorBoardCallback](https://github.com/huggingface/transformers/blob/7edc33ac7a2572698045fed3b5115bca23f40805/src/transformers/integrations.py#L573). \r\n\r\nThis is added by default as a reporting callback if tensorboard is in your environment [here](https://github.com/huggingface/transformers/blob/7edc33ac7a2572698045fed3b5115bca23f40805/src/transformers/trainer.py#L539). ",
"Thanks a lot @amyeroberts !"
] | 1,688 | 1,689 | 1,689 |
NONE
| null |
### System Info
Hi. I'm running the Hugging Face `bert-base` model on a Cloud TPU, but the `--logging_dir` folder I expected was not generated.
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The script is at [here](https://github.com/GoogleCloudPlatform/ml-testing-accelerators/blob/master/tests/pytorch/nightly/hf-lm.libsonnet).
To reproduce, run these commands in a docker container on a TPU VM:
```
$ cd
$ git clone https://github.com/huggingface/transformers.git
$ cd transformers && pip install .
$ pip install datasets evaluate scikit-learn
$ python3 examples/pytorch/xla_spawn.py \
--num_cores 8 \
examples/pytorch/language-modeling/run_mlm.py \
--logging_dir ./tensorboard-metrics \
--cache_dir ./cache_dir \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train \
--do_eval \
--overwrite_output_dir \
--output_dir language-modeling \
--logging_steps 30 \
--save_steps 3000 \
--overwrite_cache \
--debug tpu_metrics_debug \
--model_type=bert \
--model_name_or_path bert-base-cased \
--num_train_epochs 1 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16
```
### Expected behavior
I expect a folder `tensorboard-metrics` to be generated in `~/transformers` but I couldn't find it.
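For reference, a minimal sketch of the logging setup I assumed would produce the TensorBoard event files (the paths below are illustrative, and the `tensorboard` package must be installed for the callback to be registered):
```python
from transformers import TrainingArguments

# Illustrative only: event files are written by the TensorBoardCallback,
# which is added automatically only when the `tensorboard` package is importable.
args = TrainingArguments(
    output_dir="language-modeling",
    logging_dir="./tensorboard-metrics",  # where the event files should appear
    logging_steps=30,
    report_to=["tensorboard"],            # explicit, instead of relying on auto-detection
)
print(args.report_to)
```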
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24607/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24606
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24606/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24606/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24606/events
|
https://github.com/huggingface/transformers/issues/24606
| 1,783,205,632 |
I_kwDOCUB6oc5qSY8A
| 24,606 |
RuntimeError: Could not infer dtype of NoneType
|
{
"login": "TekeshwarHirwani",
"id": 79913769,
"node_id": "MDQ6VXNlcjc5OTEzNzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/79913769?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TekeshwarHirwani",
"html_url": "https://github.com/TekeshwarHirwani",
"followers_url": "https://api.github.com/users/TekeshwarHirwani/followers",
"following_url": "https://api.github.com/users/TekeshwarHirwani/following{/other_user}",
"gists_url": "https://api.github.com/users/TekeshwarHirwani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TekeshwarHirwani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TekeshwarHirwani/subscriptions",
"organizations_url": "https://api.github.com/users/TekeshwarHirwani/orgs",
"repos_url": "https://api.github.com/users/TekeshwarHirwani/repos",
"events_url": "https://api.github.com/users/TekeshwarHirwani/events{/privacy}",
"received_events_url": "https://api.github.com/users/TekeshwarHirwani/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @TekeshwarHirwani \r\n\r\nThe files under [transformers](https://github.com/huggingface/transformers/tree/main)/[examples](https://github.com/huggingface/transformers/tree/main/examples)/[legacy](https://github.com/huggingface/transformers/tree/main/examples/legacy) is no longer maintained: (from the README file)\r\n\r\n> This folder contains examples which are not actively maintained (mostly contributed by the community).\r\n\r\n> Using these examples together with a recent version of the library usually requires to make small (sometimes big) adaptations to get the scripts working.\r\n\r\nYou can use the files under [token-classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) ๐ค ",
"Thankyou for response, But the model I am using is microsoft/biogpt and it is having slow tokenizer, readme file of examples/tokenclassification/pytorch you have mentioned this :\r\n\r\nNote: This script only works with models that have a fast tokenizer (backed by the ๐ค Tokenizers library) as it uses special features of those tokenizers. You can check if your favorite model has a fast tokenizer in [this table](https://huggingface.co/transformers/index.html#supported-frameworks), if it doesn't you can still use the old version of the script.\r\n",
"Hi @TekeshwarHirwani \r\n\r\nIn this situation, you will have to dive into the error messages, set some breakpoints, investigate the values of some variables to figure out why there is a None value. And see if you can modify the `legacy` code to make it work.\r\n\r\nYou can try [Hugging Face Forums](https://discuss.huggingface.co/) to see if someone else had the same issues and if there are already some approaches.",
"Thanks "
] | 1,688 | 1,688 | 1,688 |
NONE
| null |
### System Info
I was using the microsoft/biogpt and gpt2 models with the old run_ner.py script (https://github.com/huggingface/transformers/blob/main/examples/legacy/token-classification/run_ner.py), since both only have slow tokenizers, and both fail with the same error:
RuntimeError: Could not infer dtype of NoneType
I used the same dataset given in the repo and only changed the model name, that's it.
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. change the model name to microsoft/biogpt
2. Follow the instruction given on the repo
### Expected behavior
It should train successfully.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24606/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24605
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24605/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24605/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24605/events
|
https://github.com/huggingface/transformers/pull/24605
| 1,783,175,660 |
PR_kwDOCUB6oc5UXYpt
| 24,605 |
Fix model referenced and results in documentation. Model mentioned was inaccessible.
|
{
"login": "rafaelpadilla",
"id": 31217453,
"node_id": "MDQ6VXNlcjMxMjE3NDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafaelpadilla",
"html_url": "https://github.com/rafaelpadilla",
"followers_url": "https://api.github.com/users/rafaelpadilla/followers",
"following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}",
"gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions",
"organizations_url": "https://api.github.com/users/rafaelpadilla/orgs",
"repos_url": "https://api.github.com/users/rafaelpadilla/repos",
"events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafaelpadilla/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@amyeroberts @MKhalusova
This is a (really) small change on the documentation.
The mentioned model (`MariaK/detr-resnet-50_finetuned_cppe5`) was either removed or set to private, so I could not reproduce the example shown.
I basically reference the same model from another user, which provides a slightly better result; that's why I also updated the metrics.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24605/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24605/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24605",
"html_url": "https://github.com/huggingface/transformers/pull/24605",
"diff_url": "https://github.com/huggingface/transformers/pull/24605.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24605.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24604
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24604/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24604/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24604/events
|
https://github.com/huggingface/transformers/issues/24604
| 1,783,136,819 |
I_kwDOCUB6oc5qSIIz
| 24,604 |
Contradictory information in documentation about the ability to push qunatized models to hub
|
{
"login": "amdnsr",
"id": 33130717,
"node_id": "MDQ6VXNlcjMzMTMwNzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/33130717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amdnsr",
"html_url": "https://github.com/amdnsr",
"followers_url": "https://api.github.com/users/amdnsr/followers",
"following_url": "https://api.github.com/users/amdnsr/following{/other_user}",
"gists_url": "https://api.github.com/users/amdnsr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amdnsr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amdnsr/subscriptions",
"organizations_url": "https://api.github.com/users/amdnsr/orgs",
"repos_url": "https://api.github.com/users/amdnsr/repos",
"events_url": "https://api.github.com/users/amdnsr/events{/privacy}",
"received_events_url": "https://api.github.com/users/amdnsr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada ",
"Hi @amdnsr \r\nThanks for the issue\r\nas explained in the mentioned paragraphs, it is possible to push 8bit quantized weights only if you use the latest transformers + bitsandbytes. However, pushing 4bit weights is currently not supported",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,691 | 1,691 |
NONE
| null |
### System Info
Using Google Colab and the main branch of the transformers library on GitHub.
### Who can help?
@sgugger @stevhliu @MKhalusova
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The notes at the end of the sections [Load a large model in 4bit](https://huggingface.co/docs/transformers/main/main_classes/quantization#load-a-large-model-in-4bit) and [Load a large model in 8bit](https://huggingface.co/docs/transformers/main/main_classes/quantization#load-a-large-model-in-8bit) suggest that it's not possible to push the quantized weights to the Hub:
> Note that once a model has been loaded in 4-bit it is currently not possible to push the quantized weights on the Hub.
> Note that once a model has been loaded in 8-bit it is currently not possible to push the quantized weights on the Hub except if you use the latest transformers and bitsandbytes.
But the example in [Push quantized models on the 🤗 Hub](https://huggingface.co/docs/transformers/main/main_classes/quantization#push-quantized-models-on-the-hub) suggests that it is possible to push quantized models to the Hub.
The same is suggested in [Load a quantized model from the 🤗 Hub](https://huggingface.co/docs/transformers/main/main_classes/quantization#load-a-quantized-model-from-the-hub).
Does it mean that push to hub is only supported for 8-bit quantized models when using the latest transformers and bitsandbytes but NOT for 4-bit models?
Or is it actually possible to push to hub for both 8-bit and 4-bit quantized models?
### Expected behavior
Can 4-bit and 8-bit quantized models be pushed to and loaded from the Hub?
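For context, this is the kind of flow being asked about, a minimal sketch based on the 8-bit docs example (the checkpoint and repo id below are placeholders):
```python
from transformers import AutoModelForCausalLM

# 8-bit path, following the docs' "Push quantized models on the Hub" example.
# Requires a recent transformers + bitsandbytes and a CUDA GPU.
model_8bit = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-560m",  # placeholder checkpoint
    device_map="auto",
    load_in_8bit=True,
)

# According to the docs this should work for 8-bit weights with recent versions;
# the open question in this issue is whether the same holds for 4-bit.
model_8bit.push_to_hub("my-username/bloom-560m-8bit")  # placeholder repo id
```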
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24604/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24604/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24603
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24603/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24603/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24603/events
|
https://github.com/huggingface/transformers/pull/24603
| 1,783,076,358 |
PR_kwDOCUB6oc5UXCuD
| 24,603 |
Make warning disappear for remote code in pipelines
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,688 | 1,688 | 1,688 |
COLLABORATOR
| null |
# What does this PR do?
Currently, loading a pipeline with a model that has its code on the Hub will result in a warning that the model is not in the right auto class. This PR adds the custom class in the auto mapping so that this warning is not triggered.
Related to #24598
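For illustration, a rough sketch of the situation this addresses (the checkpoint name is a placeholder; the exact registration done in this PR may differ):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

checkpoint = "some-org/model-with-remote-code"  # placeholder

# Loading the remote-code model explicitly and handing it to the pipeline already
# avoids the "model is not supported for text-generation" warning today; this PR
# registers the custom class in the auto mapping so that the plain
# pipeline(..., model=checkpoint, trust_remote_code=True) path is equally quiet.
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
```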
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24603/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24603",
"html_url": "https://github.com/huggingface/transformers/pull/24603",
"diff_url": "https://github.com/huggingface/transformers/pull/24603.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24603.patch",
"merged_at": 1688511794000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24602
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24602/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24602/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24602/events
|
https://github.com/huggingface/transformers/issues/24602
| 1,783,010,814 |
I_kwDOCUB6oc5qRpX-
| 24,602 |
Support gradient checkpointing for ESM models
|
{
"login": "mahdip72",
"id": 42680708,
"node_id": "MDQ6VXNlcjQyNjgwNzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/42680708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mahdip72",
"html_url": "https://github.com/mahdip72",
"followers_url": "https://api.github.com/users/mahdip72/followers",
"following_url": "https://api.github.com/users/mahdip72/following{/other_user}",
"gists_url": "https://api.github.com/users/mahdip72/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mahdip72/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mahdip72/subscriptions",
"organizations_url": "https://api.github.com/users/mahdip72/orgs",
"repos_url": "https://api.github.com/users/mahdip72/repos",
"events_url": "https://api.github.com/users/mahdip72/events{/privacy}",
"received_events_url": "https://api.github.com/users/mahdip72/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
closed
| false | null |
[] |
[
"cc @Rocketknight1 ",
"Any updates?",
"It's on the to-do list, but I'm afraid there are competing priorities at the moment!",
"Let's open it up for anyone in the community who might want to tackle it :) ",
"Hi @amyeroberts @Rocketknight1 I would like to work on this ",
"@sanjeevk-os Great! Once you have the code ready, open a PR and ping both @Rocketknight1 and me. Looking forward to reviewing! ",
"Hi @sanjeevk-os, I actually took a look at the ESM code - it actually looks like some of the supports for gradient checkpointing are already there, in which case you just need to make a one-line change to set `supports_gradient_checkpointing = True`",
"Hi @Rocketknight1 Thank you for taking a look. I also noticed that the ESM model has the _create_custom_forward_ passed to torch checkpoint function. I will do some more checks and will raise a PR soon.",
"Hi @sanjeevk-os - we're getting even more requests for this, so we'd like to try to add it soon! If you're having trouble, just let us know. We can take over the PR internally to try to get it through, and we appreciate your effort regardless.",
"This issue has now been resolved - thank you to @sanjeevk-os for the very clean PR!"
] | 1,688 | 1,695 | 1,695 |
NONE
| null |
Would you please add support for `gradient_checkpointing_enable()` to the ESM models?
These are currently the best available pre-trained protein language models for researchers.
Many thanks.
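For clarity, a minimal sketch of the usage being asked for, assuming the feature lands (the checkpoint below is just one of the public ESM-2 checkpoints):
```python
from transformers import EsmForMaskedLM

model = EsmForMaskedLM.from_pretrained("facebook/esm2_t6_8M_UR50D")

# This call requires `supports_gradient_checkpointing = True` on the model class;
# it trades extra compute for a large reduction in activation memory during training.
model.gradient_checkpointing_enable()
```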
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24602/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24601
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24601/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24601/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24601/events
|
https://github.com/huggingface/transformers/pull/24601
| 1,782,781,471 |
PR_kwDOCUB6oc5UWGFw
| 24,601 |
add link to accelerate doc
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,688 | 1,689 | 1,689 |
MEMBER
| null |
# What does this PR do?
This PR modifies the quantization doc to include a link to the Accelerate documentation for users who want to quantize their own PyTorch model.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24601/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24601/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24601",
"html_url": "https://github.com/huggingface/transformers/pull/24601",
"diff_url": "https://github.com/huggingface/transformers/pull/24601.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24601.patch",
"merged_at": 1689025771000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24600
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24600/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24600/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24600/events
|
https://github.com/huggingface/transformers/issues/24600
| 1,782,723,317 |
I_kwDOCUB6oc5qQjL1
| 24,600 |
IndexError: index -1 is out of bounds for dimension 1 with size 0
|
{
"login": "diaojunxian",
"id": 19700467,
"node_id": "MDQ6VXNlcjE5NzAwNDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/19700467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/diaojunxian",
"html_url": "https://github.com/diaojunxian",
"followers_url": "https://api.github.com/users/diaojunxian/followers",
"following_url": "https://api.github.com/users/diaojunxian/following{/other_user}",
"gists_url": "https://api.github.com/users/diaojunxian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/diaojunxian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/diaojunxian/subscriptions",
"organizations_url": "https://api.github.com/users/diaojunxian/orgs",
"repos_url": "https://api.github.com/users/diaojunxian/repos",
"events_url": "https://api.github.com/users/diaojunxian/events{/privacy}",
"received_events_url": "https://api.github.com/users/diaojunxian/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @gante @sanchit-gandhi ",
"Hey @diaojunxian ๐ \r\n\r\nYour reproducer contains private data, which means we can't easily reproduce on our end -- would you be able to share the audio file with us OR rewrite the reproducer from public data?\r\n\r\nAt a first glance, because of the thrown exception (`IndexError: index -1 is out of bounds for dimension 1 with size 0` in `next_token_logits = outputs.logits[:, -1, :]`), I'd bet something went wrong at preprocessing time :D bad model input shapes -> bad model output shapes",
"> Hey @diaojunxian ๐\r\n> \r\n> Your reproducer contains private data, which means we can't easily reproduce on our end -- would you be able to share the audio file with us OR rewrite the reproducer from public data?\r\n> \r\n> At a first glance, because of the thrown exception (`IndexError: index -1 is out of bounds for dimension 1 with size 0` in `next_token_logits = outputs.logits[:, -1, :]`), I'd bet something went wrong at preprocessing time :D bad model input shapes -> bad model output shapes\r\n\r\nI can send it to you privately, but it cannot be published on the Internet. Only you can personally verify this bug. Can you see it?\r\n",
"@diaojunxian yeah, that would be helpful. You can send it to the email attached to my GH account ([[email protected]](mailto:[email protected]))\r\n\r\nYou are using an unmodified `openai/whisper-large-v2`, correct?",
"> start = 23196064\r\n> end = 23364576\r\n\r\nyes, unmodified whisper-large-v2, and had send the audio to the gmail.",
"Hey @diaojunxian ๐ \r\n\r\nIn both snippets, the problem is the same: as soon as the model tries to generate beyond its [maximum length](https://huggingface.co/openai/whisper-large-v2/blob/1f66457e6e36eeb6d89078882a39003e55c330b8/config.json#L42), the output sequence dimension becomes 0, causing the exception.\r\n\r\nI've found the issue and will open a PR to fix it. The second example you provided works perfectly after the fix. The first one probably will fail because of `max_new_tokens=3000` (Whisper's maximum length is 448 and we default generation to its maximum length, you probably shouldn't set `max_new_tokens` at all :) )",
"After the PR linked above gets merged, you can install from `main` and it should work :)"
] | 1,688 | 1,688 | 1,688 |
NONE
| null |
### System Info
PC: M2
transformers == 4.31.0.dev0
Related discussion: https://github.com/openai/whisper/discussions/1478
I hit the following error:
```
in <module>:9 โ
โ โ
โ 6 prompt_ids = processor.get_prompt_ids(prompt) โ
โ 7 โ
โ 8 forced_decoder_ids = processor.get_decoder_prompt_ids(language="zh", task="transcribe") โ
โ โฑ 9 predicted_ids = model.generate(input_features, prompt_ids=prompt_ids, forced_decoder_ids โ
โ 10 โ โ โ โ โ โ โ max_new_tokens=3000) โ
โ 11 transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) โ
โ 12 print("่ๆถ:", time.time() - start_time, transcription) โ
โ โ
โ /Users/diaojunxian/anaconda3/envs/3.9/lib/python3.9/site-packages/transformers/models/whisper/mo โ
โ deling_whisper.py:1664 in generate โ
โ โ
โ 1661 โ โ if generation_config.return_timestamps: โ
โ 1662 โ โ โ logits_processor = [WhisperTimeStampLogitsProcessor(generation_config)] โ
โ 1663 โ โ โ
โ โฑ 1664 โ โ return super().generate( โ
โ 1665 โ โ โ inputs, โ
โ 1666 โ โ โ generation_config, โ
โ 1667 โ โ โ logits_processor, โ
โ โ
โ /Users/diaojunxian/anaconda3/envs/3.9/lib/python3.9/site-packages/torch/utils/_contextlib.py:115 โ
โ in decorate_context โ
โ โ
โ 112 โ @functools.wraps(func) โ
โ 113 โ def decorate_context(*args, **kwargs): โ
โ 114 โ โ with ctx_factory(): โ
โ โฑ 115 โ โ โ return func(*args, **kwargs) โ
โ 116 โ โ
โ 117 โ return decorate_context โ
โ 118 โ
โ โ
โ /Users/diaojunxian/anaconda3/envs/3.9/lib/python3.9/site-packages/transformers/generation/utils. โ
โ py:1522 in generate โ
โ โ
โ 1519 โ โ โ โ ) โ
โ 1520 โ โ โ โ
โ 1521 โ โ โ # 11. run greedy search โ
โ โฑ 1522 โ โ โ return self.greedy_search( โ
โ 1523 โ โ โ โ input_ids, โ
โ 1524 โ โ โ โ logits_processor=logits_processor, โ
โ 1525 โ โ โ โ stopping_criteria=stopping_criteria, โ
โ โ
โ /Users/diaojunxian/anaconda3/envs/3.9/lib/python3.9/site-packages/transformers/generation/utils. โ
โ py:2349 in greedy_search โ
โ โ
โ 2346 โ โ โ if synced_gpus and this_peer_finished: โ
โ 2347 โ โ โ โ continue # don't waste resources running the code we don't need โ
โ 2348 โ โ โ โ
โ โฑ 2349 โ โ โ next_token_logits = outputs.logits[:, -1, :] โ
โ 2350 โ โ โ โ
โ 2351 โ โ โ # pre-process distribution โ
โ 2352 โ โ โ next_tokens_scores = logits_processor(input_ids, next_token_logits)
```
Both of the following code snippets produce the error.
```
from transformers import WhisperForConditionalGeneration, WhisperProcessor
import librosa
import soundfile
import torchaudio
base_model = "/Users/ddd/Documents/github/whisper-large-v2"
processor = WhisperProcessor.from_pretrained(base_model,
language="zh",
task="transcribe",
local_files_only="True")
forced_decoder_ids = processor.get_decoder_prompt_ids(language="zh", task="transcribe")
# ่ทๅๆจกๅ
model = WhisperForConditionalGeneration.from_pretrained(base_model,
device_map="auto",
local_files_only=True).half()
model.eval()
audio_file = "/Users/ddd/Documents/gitlab/llm-train/yuyin/simple.m4a"
src_signal, sample_rate = librosa.load(audio_file, sr=16000)
start = 23196064
end = 23364576
src_signal_demo = src_signal[start:end]
input_features = processor(src_signal_demo, sampling_rate=sample_rate, return_tensors="pt").input_features.half().to("mps")
prompt = 'ไปฅไธๆฏๆฎ้่ฏ็ๅฅๅญ'
prompt_ids = processor.get_prompt_ids(prompt)
forced_decoder_ids = processor.get_decoder_prompt_ids(language="zh", task="transcribe")
predicted_ids = model.generate(input_features, prompt_ids=prompt_ids, forced_decoder_ids=forced_decoder_ids,
max_new_tokens=3000)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
```
```
from transformers import pipeline
pipe = pipeline(
task="automatic-speech-recognition",
model="openai/whisper-large-v2",
device="mps",
chunk_length_s=30, # if not precised then only generate as much as `max_new_tokens`
generate_kwargs = {"num_beams": 5} # same as setting as "openai whisper" default
)
audio_file = "/Users/ddd/Documents/gitlab/llm-train/yuyin/simple.m4a"
src_signal, sample_rate = librosa.load(audio_file, sr=16000)
start = 23196064
end = 23364576
src_signal_demo = src_signal[start:end]
prompt = 'ไปฅไธๆฏๆฎ้่ฏ็ๅฅๅญ'
prompt_ids = pipe.tokenizer.get_prompt_ids(prompt, return_tensors="pt")
result = pipe(src_signal_demo, generate_kwargs={"language": "zh", "task": "transcribe", "prompt_ids": prompt_ids})
print(result["text"])
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
1. load the audio
2. slice the audio
3. add the prompt
4. transcribe the sliced audio, at which point the error occurs.
### Expected behavior
The sliced audio should be transcribed into text.
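Based on the discussion in the comments, a minimal variant of the first snippet's `generate` call that stays within Whisper's maximum target length (448 tokens) by not passing `max_new_tokens` at all; it reuses the variables defined in that snippet:
```python
# Continues from the first snippet above (model, processor, input_features,
# prompt_ids and forced_decoder_ids already defined). Letting generate() default
# to the model's own maximum length avoids the empty-output IndexError.
predicted_ids = model.generate(
    input_features,
    prompt_ids=prompt_ids,
    forced_decoder_ids=forced_decoder_ids,
)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print(transcription)
```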
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24600/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24599
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24599/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24599/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24599/events
|
https://github.com/huggingface/transformers/pull/24599
| 1,782,699,187 |
PR_kwDOCUB6oc5UVzx7
| 24,599 |
Use protobuf 4
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I have checked to make sure `protobuf==4.23.3` is installed and used in the CI.",
"@Narsil mentioned we should keep the old/new version files, and determine which one to use (by the protobuf version numbers?).\r\n\r\nLet me know more details about this please.",
"_The documentation is not available anymore as the PR was closed or merged._",
"Will merge once the CI for protobuf 4 a nd protobuf 3 both green.\r\n\r\nDon't hesitate to leave comments later if any @Narsil ๐ ",
"Great PR. I hope this doesn't have any unforeseen consequence (I don't know what are the breaking changes between protobuf 3 and 4)",
"> I don't know what are the breaking changes between protobuf 3 and 4\r\n\r\nYeah, me neither. I just rely on our CI ๐ค ",
"@ydshieh I'm now getting\r\n\r\n```\r\n File \"/home/fxmarty/hf_internship/transformers/src/transformers/utils/sentencepiece_model_pb2_new.py\", line 16, in <module>\r\n DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(\r\nTypeError: Couldn't build proto file into descriptor pool: Invalid default '0.9995' for field sentencepiece.TrainerSpec.character_coverage of type 2\r\n```\r\n\r\nwith protobuf=4.23.4 & transformers main when doing `from transformers.utils import sentencepiece_model_pb2_new`. Any idea what's wrong? protobuf 3.20.3 works well for me.",
"Hi @fxmarty !\r\n\r\nHmm, I remember I got `4.23.3` when I made this PR. Not sure if it's the reason. Let me check.",
"Hi again\r\n\r\n~~@fxmarty `4.23.4` works for me.~~\r\n\r\n\r\n\r\n\r\n```\r\n(py39) ฮป pip show protobuf\r\nName: protobuf\r\nVersion: 4.23.4\r\n\r\nPython 3.9.13 (main, Oct 13 2022, 21:23:06) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from transformers.utils import sentencepiece_model_pb2\r\n>>> from transformers.utils import sentencepiece_model_pb2_new\r\n>>>\r\n```",
"Hm, there is something wrong and I am looking deeper.",
"OK, PR #24622 makes this failing.\r\n\r\ncc @ArthurZucker \r\n\r\n",
"Comes from #24690. More details: \r\nThis does not fix the version error, but fixes the issue with 3.20.3, when we cannot use seqio or anything importing protobuf: \r\n```python \r\n! pip install protobuf=3.20.3\r\nfrom transformers import AutoTokenizer\r\nfrom seqio import SentencePieceVocabulary\r\nTypeError: Couldn't build proto file into descriptor pool!\r\nInvalid proto descriptor for file \"sentencepiece_model.proto\":\r\n sentencepiece_model.proto: A file with this name is already in the pool.\r\n```",
"@fxmarty We are not able to reproduce the situation you have. Could you specify your transformers commit that is installed? Thanks.\r\n\r\n(The above discussion is another story)",
"@ydshieh It appears I can't reproduce anymore the issue I had. Must have messed something up in my install that I fixed since then. Maybe the `pip uninstall protobuf && pip install --no-binary protobuf protobuf==3.20.3` that I used in the meanwhile helped to fix things.",
"@ydshieh Oh, actually I have this issue only running in my scripting IDE pyzo, but not in a terminal. The same python env is used so quite weird.",
"@ArthurZucker @ydshieh How does `sentencepiece_model_pb2_new.py` define `TrainerSpec.character_coverage`?\r\n\r\nSpecifically, how is https://github.com/huggingface/transformers/blob/33aafc26ee68df65c7d9457259fc3d59f79eef4f/src/transformers/utils/sentencepiece_model_pb2_new.py#L17 generated? If I use `decode()` on it python complains that `'utf-8' codec can't decode byte 0x80 in position 43`.",
"Hi @fxmarty \r\n\r\nThat file is being generated by protobuf compile. I don't have the courage to read it ...\r\n\r\nWhen I enable the support to use protobuf v4, I ran the whole CI (non-slow), and no failed test.\r\n\r\nCould you show us your usage (related to protobuf) that produce some failures - maybe I can help ?"
] | 1,688 | 1,689 | 1,688 |
COLLABORATOR
| null |
# What does this PR do?
I moved forward and generated the new `src/transformers/utils/sentencepiece_model_pb2.py` using the protocol buffer compiler. There are no test failures, but a quality check fails. I can probably remove the undefined part `_TRAINERSPEC`, which doesn't seem to be used.
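As an optional sanity check (a minimal sketch, not part of the PR itself), one can verify that the regenerated module imports cleanly under the installed protobuf version:
```python
import google.protobuf

from transformers.utils import sentencepiece_model_pb2

# The regenerated _pb2 module should import without raising; the descriptor
# errors reported in the comments show up at this import if something is off.
print("protobuf version:", google.protobuf.__version__)
print("loaded:", sentencepiece_model_pb2.__name__)
```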
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24599/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24599/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24599",
"html_url": "https://github.com/huggingface/transformers/pull/24599",
"diff_url": "https://github.com/huggingface/transformers/pull/24599.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24599.patch",
"merged_at": 1688151416000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24598
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24598/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24598/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24598/events
|
https://github.com/huggingface/transformers/issues/24598
| 1,782,578,710 |
I_kwDOCUB6oc5qP_4W
| 24,598 |
Falcon-40b-instruct on Runpod
|
{
"login": "Mrin7",
"id": 134509550,
"node_id": "U_kgDOCARz7g",
"avatar_url": "https://avatars.githubusercontent.com/u/134509550?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mrin7",
"html_url": "https://github.com/Mrin7",
"followers_url": "https://api.github.com/users/Mrin7/followers",
"following_url": "https://api.github.com/users/Mrin7/following{/other_user}",
"gists_url": "https://api.github.com/users/Mrin7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mrin7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mrin7/subscriptions",
"organizations_url": "https://api.github.com/users/Mrin7/orgs",
"repos_url": "https://api.github.com/users/Mrin7/repos",
"events_url": "https://api.github.com/users/Mrin7/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mrin7/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @Mrin7, thanks for raising this issue. \r\n\r\nIndeed, this is arising because of [this check](https://github.com/huggingface/transformers/blob/78a2b19fc84ed55c65f4bf20a901edb7ceb73c5f/src/transformers/pipelines/text_generation.py#L65) in the pipeline code, and the falcon model isn't registered in `MODEL_FOR_CAUSAL_LM_MAPPING`. \r\n\r\nI'm able to get things working if I explicitly add it e.g. \r\n\r\n```python\r\nimport torch\r\n\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM, pipeline\r\nfrom transformers.models.auto.modeling_auto import MODEL_FOR_CAUSAL_LM_MAPPING_NAMES\r\n\r\n# Explicitly add the mapping here\r\nMODEL_FOR_CAUSAL_LM_MAPPING_NAMES[\"RefinedWebModel\"] = \"RWForCausalLM\"\r\n\r\ncheckpoint = \"tiiuae/falcon-40b-instruct\"\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n\r\ngenerator = pipeline(\r\n \"text-generation\",\r\n model=checkpoint,\r\n tokenizer=tokenizer,\r\n torch_dtype=torch.bfloat16,\r\n trust_remote_code=True,\r\n device_map=\"auto\",\r\n)\r\nsequences = generator(\r\n \"What does a raindrop feel when it hits the sea?:\",\r\n max_length=200,\r\n do_sample=True,\r\n top_k=10,\r\n num_return_sequences=1,\r\n eos_token_id=tokenizer.eos_token_id,\r\n)\r\nfor seq in sequences:\r\n print(f\"Result: {seq['generated_text']}\")\r\n```\r\n\r\n@sgugger For models on the hub, what's the standard way for enabling models to be loaded in a pipeline?\r\n\r\n",
"Loading the model outside of the pipeline, or the workaround you mention. The check should be ignored when `trust_remote_code=True` but that's a bit more work on our side.",
"[amyeroberts](https://github.com/amyeroberts)\r\nHi Amy - I tried to add as suggested by you. \r\n\r\nimport torch\r\nimport transformers\r\nfrom accelerate import init_empty_weights\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM, pipeline\r\nfrom transformers.models.auto.modeling_auto import MODEL_FOR_CAUSAL_LM_MAPPING_NAMES\r\n\r\n# Explicitly add the mapping here\r\nMODEL_FOR_CAUSAL_LM_MAPPING_NAMES[\"RefinedWebModel\"] = \"RWForCausalLM\"\r\n\r\ncheckpoint = \"tiiuae/falcon-7b-instruct\"\r\n\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n\r\n\r\n\r\n\r\ngenerator = pipeline(\r\n \"text-generation\",\r\n model=checkpoint,\r\n tokenizer=tokenizer,\r\n torch_dtype=torch.bfloat16,\r\n trust_remote_code=True,\r\n #low_cpu_mem_usage=True,\r\n #device_map=\"auto\",\r\n)\r\nsequences = generator(\r\n \"Tell me everything about abortion bans in USA:\",\r\n max_length=200,\r\n do_sample=True,\r\n top_k=10,\r\n num_return_sequences=1,\r\n eos_token_id=tokenizer.eos_token_id,\r\n)\r\nfor seq in sequences:\r\n print(f\"Result: {seq['generated_text']}\")\r\n\r\nTill getting the same error:\r\nThe model 'RWForCausalLM' is not supported for text-generation. Supported models are ['BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'CodeGenForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'ElectraForCausalLM', 'ErnieForCausalLM', 'GitForCausalLM', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'LlamaForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MusicgenForCausalLM', 'MvpForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM'].\r\n/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1264: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation )\r\n warnings.warn(\r\nSetting `pad_token_id` to `eos_token_id`:11 for open-end generation.",
"[sgugger](https://github.com/sgugger) - can you please show me how to load the model outside pipeline?",
"@Mrin7 I'm looking more into this, but this is an error, just a warning. I'll make it disappear but you can already use the pipeline.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,691 | 1,691 |
NONE
| null |
### System Info
2 x A100 80GB
32 vCPU 251 GB RAM
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-40b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"What does a raindrop feel when it hits the sea?:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
### Expected behavior
Expected it to run smoothly and give an output.
Error:
The model 'RWForCausalLM' is not supported for text-generation. Supported models are ['BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'CodeGenForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'ElectraForCausalLM', 'ErnieForCausalLM', 'GitForCausalLM', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'LlamaForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MvpForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM'].
/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1259: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
warnings.warn(
Setting pad_token_id to eos_token_id:11 for open-end generation.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24598/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24597
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24597/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24597/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24597/events
|
https://github.com/huggingface/transformers/issues/24597
| 1,782,559,200 |
I_kwDOCUB6oc5qP7Hg
| 24,597 |
Test with Pydantic V2
|
{
"login": "lig",
"id": 38705,
"node_id": "MDQ6VXNlcjM4NzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/38705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lig",
"html_url": "https://github.com/lig",
"followers_url": "https://api.github.com/users/lig/followers",
"following_url": "https://api.github.com/users/lig/following{/other_user}",
"gists_url": "https://api.github.com/users/lig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lig/subscriptions",
"organizations_url": "https://api.github.com/users/lig/orgs",
"repos_url": "https://api.github.com/users/lig/repos",
"events_url": "https://api.github.com/users/lig/events{/privacy}",
"received_events_url": "https://api.github.com/users/lig/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @lig \r\n\r\nThanks again for the PR #24596. Really appreciated.\r\n\r\nRegarding using Pydantic V2, I am afraid that the involved places are not directly in `transformers` codebase.\r\n\r\nFor example, in \r\n\r\nhttps://github.com/huggingface/transformers/pull/24596#issuecomment-1615176591\r\n\r\nit shows \r\n\r\n```bash\r\n2023-06-30T20:07:31.9883431Z > [19/19] RUN python3 -c \"from deepspeed.launcher.runner import main\":\r\n2023-06-30T20:07:31.9883916Z 1.621 from deepspeed.runtime.zero.config import DeepSpeedZeroConfig\r\n2023-06-30T20:07:31.9884613Z 1.621 File \"/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/zero/config.py\", line 76, in <module>\r\n2023-06-30T20:07:31.9885116Z 1.621 class DeepSpeedZeroConfig(DeepSpeedConfigModel):\r\n2023-06-30T20:07:31.9885814Z 1.621 File \"/usr/local/lib/python3.8/dist-packages/pydantic/_internal/_model_construction.py\", line 171, in __new__\r\n2023-06-30T20:07:31.9886256Z 1.621 set_model_fields(cls, bases, config_wrapper, types_namespace)\r\n2023-06-30T20:07:31.9886812Z 1.621 File \"/usr/local/lib/python3.8/dist-packages/pydantic/_internal/_model_construction.py\", line 361, in set_model_fields\r\n2023-06-30T20:07:31.9887329Z 1.621 fields, class_vars = collect_model_fields(cls, bases, config_wrapper, types_namespace, typevars_map=typevars_map)\r\n2023-06-30T20:07:31.9888039Z 1.621 File \"/usr/local/lib/python3.8/dist-packages/pydantic/_internal/_fields.py\", line 112, in collect_model_fields\r\n2023-06-30T20:07:31.9888950Z 1.621 raise NameError(f'Field \"{ann_name}\" has conflict with protected namespace \"{protected_namespace}\"')\r\n2023-06-30T20:07:31.9889546Z 1.621 NameError: Field \"model_persistence_threshold\" has conflict with protected namespace \"\r\n```\r\nwhich indicates `/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/zero/config.py` using `pydantic`.\r\n\r\nIt's the 3rd party libraries using pydantic have to do something in order to be run with pydantic V2. Right now, `transformers` can only pin v1 and wait.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,691 | 1,691 |
CONTRIBUTOR
| null |
### Feature request
Pydantic V2 is about to be released. A Pydantic 2.0b3 pre-release version is already available: https://pypi.org/project/pydantic/2.0b3/
Please test transformers with Pydantic V2.
There is a special tool that can help with migrating the code base to Pydantic V2: https://github.com/pydantic/bump-pydantic/
### Motivation
Pydantic V2 is known to break things and deprecates a lot of APIs; see https://errors.pydantic.dev/v2.0/migration.
Why upgrade? Pydantic V2 is reported to be 5-50x faster than Pydantic V1 according to https://docs.pydantic.dev/latest/blog/pydantic-v2-alpha/. This alone looks really beneficial for `transformers`. Apart from that, Pydantic V2 brings a lot of new features; see the link above.
### Your contribution
Please don't hesitate to ask for help in the [Pydantic Discussions](https://github.com/pydantic/pydantic/discussions) section and/or to [report any issues](https://github.com/pydantic/pydantic/issues) encountered in the process.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24597/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24596
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24596/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24596/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24596/events
|
https://github.com/huggingface/transformers/pull/24596
| 1,782,531,807 |
PR_kwDOCUB6oc5UVPhU
| 24,596 |
Limit Pydantic to V1 in dependencies
|
{
"login": "lig",
"id": 38705,
"node_id": "MDQ6VXNlcjM4NzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/38705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lig",
"html_url": "https://github.com/lig",
"followers_url": "https://api.github.com/users/lig/followers",
"following_url": "https://api.github.com/users/lig/following{/other_user}",
"gists_url": "https://api.github.com/users/lig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lig/subscriptions",
"organizations_url": "https://api.github.com/users/lig/orgs",
"repos_url": "https://api.github.com/users/lig/repos",
"events_url": "https://api.github.com/users/lig/events{/privacy}",
"received_events_url": "https://api.github.com/users/lig/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @lig, thanks for opening this PR! \r\n\r\nCould you provide some more information about the kind of issues / breakages expected? I can only see `pydantic` used in one place in the library [here](https://github.com/huggingface/transformers/blob/78a2b19fc84ed55c65f4bf20a901edb7ceb73c5f/src/transformers/commands/serving.py#L26), so thankfully impact is limited. \r\n\r\nFor the quality checks, you'll need to run `make style` at the top level of the repo and push any changes made. \r\n\r\ncc @ydshieh ",
"This issue about `pydantic` is real. We get errors when trying to build docker image in the push CI triggered by commit [299aafe](https://github.com/huggingface/transformers/commit/299aafe55ff03c565c059682c6fd312e4b89bc2f).\r\n\r\n@lig Thank you for this PR, it helps us a lot before the issue! I also add one more change quickly (for our CI).\r\n\r\n@amyeroberts I am going to merge once @sgugger approves.\r\n\r\n\r\n```bash\r\n2023-06-30T20:07:31.9883431Z > [19/19] RUN python3 -c \"from deepspeed.launcher.runner import main\":\r\n2023-06-30T20:07:31.9883916Z 1.621 from deepspeed.runtime.zero.config import DeepSpeedZeroConfig\r\n2023-06-30T20:07:31.9884613Z 1.621 File \"/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/zero/config.py\", line 76, in <module>\r\n2023-06-30T20:07:31.9885116Z 1.621 class DeepSpeedZeroConfig(DeepSpeedConfigModel):\r\n2023-06-30T20:07:31.9885814Z 1.621 File \"/usr/local/lib/python3.8/dist-packages/pydantic/_internal/_model_construction.py\", line 171, in __new__\r\n2023-06-30T20:07:31.9886256Z 1.621 set_model_fields(cls, bases, config_wrapper, types_namespace)\r\n2023-06-30T20:07:31.9886812Z 1.621 File \"/usr/local/lib/python3.8/dist-packages/pydantic/_internal/_model_construction.py\", line 361, in set_model_fields\r\n2023-06-30T20:07:31.9887329Z 1.621 fields, class_vars = collect_model_fields(cls, bases, config_wrapper, types_namespace, typevars_map=typevars_map)\r\n2023-06-30T20:07:31.9888039Z 1.621 File \"/usr/local/lib/python3.8/dist-packages/pydantic/_internal/_fields.py\", line 112, in collect_model_fields\r\n2023-06-30T20:07:31.9888950Z 1.621 raise NameError(f'Field \"{ann_name}\" has conflict with protected namespace \"{protected_namespace}\"')\r\n2023-06-30T20:07:31.9889546Z 1.621 NameError: Field \"model_persistence_threshold\" has conflict with protected namespace \"model_\"\r\n```",
"_The documentation is not available anymore as the PR was closed or merged._",
"@amyeroberts answering your question. I've had a quick look and I can say that this https://github.com/huggingface/transformers/blob/78a2b19fc84ed55c65f4bf20a901edb7ceb73c5f/src/transformers/commands/serving.py#L73C1-L73C36 will break.\r\n\r\nInstead of\r\n```py\r\n tokens_ids: Optional[List[int]]\r\n```\r\nit should read \r\n```py\r\n tokens_ids: Optional[List[int]] = None\r\n```\r\nThere is no implicit default None in Pydantic V2 here.\r\n\r\nThankfully, `bump-pydantic` helps with that https://github.com/pydantic/bump-pydantic/#bp001-add-default-none-to-optionalt-uniont-none-and-any-fields"
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
Pydantic is about to publish its V2 release, which will break a lot of things. This change prevents `transformers` from being used with Pydantic V2 to avoid breakage.
Also, see #24597
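For context, here is a minimal sketch of what such a pin can look like in a dependency list; the exact file and formatting in `transformers` may differ, so treat this as illustrative rather than the actual diff:
```python
# Illustrative excerpt of a setup.py-style dependency list.
install_requires = [
    "pydantic<2",  # Pydantic V2 introduces breaking API changes; stay on V1 for now
    # ... other dependencies ...
]
```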
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24596/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24596",
"html_url": "https://github.com/huggingface/transformers/pull/24596",
"diff_url": "https://github.com/huggingface/transformers/pull/24596.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24596.patch",
"merged_at": 1688162644000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24595
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24595/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24595/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24595/events
|
https://github.com/huggingface/transformers/pull/24595
| 1,782,525,663 |
PR_kwDOCUB6oc5UVOLH
| 24,595 |
Speed up TF tests by reducing hidden layer counts
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Would be nice if you can show the timing for one model (before v.s. after) ๐ . Thanks.",
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh testing locally BERT went from 510 seconds -> 220 seconds",
"I don't know if it was @sgugger either - a lot of this code is really old! I see `tf.tuple()` in there, and even I had to look up the TF 1.x docs to remember what that was supposed to do, lol",
"I know he is probably not the one to decide use `5`, but he might know the history :-)"
] | 1,688 | 1,688 | 1,688 |
MEMBER
| null |
A lot of our slow TF tests are caused by TF compilation. TF compilation isn't really affected by layer width at all - the main thing is just the number of operations it has to build a graph for. By reducing the number of hidden layers, compilation gets much faster, (hopefully) without interfering with test coverage at all.
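As an illustration of the idea (not the actual test changes in this PR), a tiny config with fewer hidden layers produces far fewer ops for TF to trace, so compile-heavy tests finish faster; the exact values below are assumptions:
```python
from transformers import BertConfig, TFBertModel

# Compilation time scales mainly with the number of graph ops (i.e. layers),
# not with layer width, so a 2-layer test config compiles much faster.
tiny_config = BertConfig(
    hidden_size=32,
    num_hidden_layers=2,   # reduced from a larger test default such as 5
    num_attention_heads=4,
    intermediate_size=37,
)
model = TFBertModel(tiny_config)
```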
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24595/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24595",
"html_url": "https://github.com/huggingface/transformers/pull/24595",
"diff_url": "https://github.com/huggingface/transformers/pull/24595.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24595.patch",
"merged_at": 1688139034000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24594
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24594/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24594/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24594/events
|
https://github.com/huggingface/transformers/pull/24594
| 1,782,466,270 |
PR_kwDOCUB6oc5UVBJW
| 24,594 |
Fix loading dataset docs link in run_translation.py example
|
{
"login": "SoyGema",
"id": 24204714,
"node_id": "MDQ6VXNlcjI0MjA0NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/24204714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SoyGema",
"html_url": "https://github.com/SoyGema",
"followers_url": "https://api.github.com/users/SoyGema/followers",
"following_url": "https://api.github.com/users/SoyGema/following{/other_user}",
"gists_url": "https://api.github.com/users/SoyGema/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SoyGema/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SoyGema/subscriptions",
"organizations_url": "https://api.github.com/users/SoyGema/orgs",
"repos_url": "https://api.github.com/users/SoyGema/repos",
"events_url": "https://api.github.com/users/SoyGema/events{/privacy}",
"received_events_url": "https://api.github.com/users/SoyGema/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the merge @amyeroberts !"
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes broken link #24579
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Rocketknight1
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24594/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24594",
"html_url": "https://github.com/huggingface/transformers/pull/24594",
"diff_url": "https://github.com/huggingface/transformers/pull/24594.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24594.patch",
"merged_at": 1688394082000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24593
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24593/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24593/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24593/events
|
https://github.com/huggingface/transformers/pull/24593
| 1,782,314,696 |
PR_kwDOCUB6oc5UUf47
| 24,593 |
Add forward methods to quantizer that also compute commitment loss
|
{
"login": "hackyon",
"id": 1557853,
"node_id": "MDQ6VXNlcjE1NTc4NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1557853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hackyon",
"html_url": "https://github.com/hackyon",
"followers_url": "https://api.github.com/users/hackyon/followers",
"following_url": "https://api.github.com/users/hackyon/following{/other_user}",
"gists_url": "https://api.github.com/users/hackyon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hackyon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hackyon/subscriptions",
"organizations_url": "https://api.github.com/users/hackyon/orgs",
"repos_url": "https://api.github.com/users/hackyon/repos",
"events_url": "https://api.github.com/users/hackyon/events{/privacy}",
"received_events_url": "https://api.github.com/users/hackyon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Much of this was adapted from https://github.com/facebookresearch/encodec/blob/main/encodec/quantization/core_vq.py\r\n\r\nFor reference, a part of the conversation is in one of the specific changes as well:\r\nhttps://github.com/huggingface/transformers/commit/4f697be0b62c4f3b0401ccbd00d1d46aac81906d",
"@ArthurZucker \r\nCreated a PR for this bit first, lemme know what you think. Thanks!",
"cc @sanchit-gandhi ",
"Thanks for the review!\r\n\r\nAs I mentioned in https://github.com/huggingface/transformers/issues/24295#issuecomment-1614100206, I am busy in July so will be slow to iterate on this, but should have more time in August to keep iterating.",
"Awesome @hackyon! I believe the Meta AI authors are planning on releasing more fine-tuning code throughout July, so we'll be in a good position to finish this PR on your return ๐ค",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24593). All of your documentation changes will be reflected on that endpoint.",
"Hey @hackyon - is this something you're till interested in working on? There was some really nice progress to kick-off this PR! Think it would make a nice contribution to have fine-tuning capabilities for the EnCodec model in `transformers`",
"Thanks Sanchit. Sorry I've been a little busy as of late. I'll give it a good look again next week. I should have something out again before Sept 15th.\r\n\r\nIt would be nice to at least get the commitment loss computation in (though I might not have the free time to complete the discriminative loss part).",
"Hey @hackyon - sounds great, thanks for the update! There's pretty complete training code on the encodec/audiocraft repo's now, so we've got plenty of reference for getting the full training code in. Would be cool to have a fine-tuning script eventually for encodec (cc @ylacombe)",
"Please take a look and let me know what you think, thanks.\r\n\r\nIf the overall structure looks good, I'll write some tests and also come up with ways to test that the loss calculation we have here is equivalent to the one from FB audiocraft (feeding both models some inputs and checking that the output losses are equal). \r\n\r\n",
"I also did some extra reading on the FB audiocraft code, and I think the ultimate training code would need to look something like this:\r\nhttps://github.com/facebookresearch/audiocraft/blob/a2b96756956846e194c9255d0cdadc2b47c93f1b/audiocraft/solvers/compression.py#L83\r\n\r\nWith the balancer weights of the losses here:\r\nhttps://github.com/facebookresearch/audiocraft/blob/a2b96756956846e194c9255d0cdadc2b47c93f1b/config/solver/compression/default.yaml#L14\r\n\r\nThe commitment_loss we are calculating in this change is part of the other_losses.\r\n\r\n\r\nWe'll need to add the balancer, the discriminator (adversarial loss), and some other spectrogram losses (aux_losses). There are also some code on the codebook updates that we might need to add as well (as they are updated using some kind of moving average process rather than through the general training step).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @hackyon - if in the future you find the time to continue on this PR and want to jump back into it, myself and Arthur would be more than happy to assist you with the final bits of the integration! You've done a nice job at setting up all the losses, so it's just a case of finalising the API and ensuring backwards compatibility!",
"Thanks @sanchit-gandhi! I'll revisit this again when I find the time. "
] | 1,688 | 1,704 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds the code to compute commitment loss for the quantizer. The loss is only computed in the newly added forward() methods.
This is a small part of the bigger #24295
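For readers unfamiliar with the term, the sketch below shows the generic VQ-style commitment loss; it is a hedged illustration, not necessarily the exact code added in this PR:
```python
import torch
import torch.nn.functional as F

def commitment_loss(inputs: torch.Tensor, quantized: torch.Tensor) -> torch.Tensor:
    # Pull the encoder outputs toward the codebook entries they were assigned to.
    # The quantized side is detached so only the encoder receives this gradient.
    return F.mse_loss(inputs, quantized.detach())
```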
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. #24295
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24593/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24593",
"html_url": "https://github.com/huggingface/transformers/pull/24593",
"diff_url": "https://github.com/huggingface/transformers/pull/24593.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24593.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24592
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24592/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24592/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24592/events
|
https://github.com/huggingface/transformers/pull/24592
| 1,782,294,524 |
PR_kwDOCUB6oc5UUbgL
| 24,592 |
Make (TF) CI faster (test only a random subset of model classes)
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Some of the very slow tests (like `test_saved_model_creation_extended` and `test_xla_fit`) only apply to a few models anyway - they're in `test_modeling_tf_core.py`, so they shouldn't have a big effect on the total test runtime. I might have a couple of ideas for speeding up `test_compile_tf_model`, though!",
"> Let's not take a random subset but the first two then. To test the base model and a model with head.\r\n\r\nWould it be ok to take the first one (base model) + a random other one with head?",
"Also, I looked a bit closer at this PR and I'm actually a bit scared of some of the changes - in particular, `test_pt_tf_model_equivalence` is one of the most important tests and picks up lots of implementation problems in TF ports, so I don't want to reduce its coverage!",
"@Rocketknight1 \r\n\r\nBut that test is not changed, i.e. it doesn't use `get_random_model_classes` introduced here. Nothing to fear ๐ ",
"> Would it be ok to take the first one (base model) + a random other one with head?\r\n\r\nI don't like randomness in tests as it makes them flaky. ",
"Well, in this situation, I do prefer to keep a random head model.\r\n\r\n - We are reducing the number of model classes being tested due to the slow runtime. If we keep the fix model classes, we are likely to **miss failures in certain model heads**. (and for the involved tests in this PR, they all pass currently for their all model classes - if not, probably just one or two.)\r\n\r\n - ~~Only slow tests are involved~~ --> **no flakyness shown on CircleCI.**\r\n - Sorry, I am wrong in this. But I can change it to **only for slow tests**.\r\n\r\nWDYT if I make changes only to slow tests?",
"I very much doubt we will have a failure on a model with head and not the others. With the randomness in the test, you won't be able to reproduce easily (and I don't see the test even printing the model class that failed) so I'd keep things reproducible. This is also on TensorFlow which has very low usage, so I don't think it's worth spending too much time over-engineering something.",
"OKOK",
"@Rocketknight1 OK for you?"
] | 1,688 | 1,688 | 1,688 |
COLLABORATOR
| null |
# What does this PR do?
Daily CI is currently running in 22h30m. @Rocketknight1 might have a way to bring it back to 19-20 hours.
For some tests, let's test only a (random) subset of the model classes.
Here is the timing of some very slow tests currently:
```
398.44s call tests/models/bert/test_modeling_tf_bert.py::TFBertModelTest::test_xla_fit
275.59s call tests/models/bert/test_modeling_tf_bert.py::TFBertModelTest::test_saved_model_creation_extended
217.84s call tests/models/bert/test_modeling_tf_bert.py::TFBertModelTest::test_compile_tf_model
106.25s call tests/models/bert/test_tokenization_bert_tf.py::BertTokenizationTest::test_saved_model
77.69s call tests/models/bert/test_modeling_tf_bert.py::TFBertModelTest::test_onnx_runtime_optimize
```
and
```
352.31s call tests/models/bart/test_modeling_tf_bart.py::TFBartModelTest::test_saved_model_creation_extended
272.56s call tests/models/bart/test_modeling_tf_bart.py::TFBartModelTest::test_compile_tf_model
270.84s call tests/models/bart/test_modeling_tf_bart.py::TFBartModelTest::test_xla_fit
132.59s call tests/models/bart/test_modeling_tf_bart.py::TFBartModelTest::test_onnx_runtime_optimize
```
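Below is a hedged sketch of the kind of helper this PR discusses; per the review comments the merged version avoids randomness, and the function name and body here are illustrative rather than the final implementation:
```python
def get_reduced_model_classes(all_model_classes):
    # Test the base model plus one model-with-head instead of every head class;
    # compilation-heavy TF tests scale roughly linearly with the number of classes.
    return list(all_model_classes[:2])
```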
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24592/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24592",
"html_url": "https://github.com/huggingface/transformers/pull/24592",
"diff_url": "https://github.com/huggingface/transformers/pull/24592.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24592.patch",
"merged_at": 1688136894000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24591
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24591/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24591/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24591/events
|
https://github.com/huggingface/transformers/pull/24591
| 1,782,227,232 |
PR_kwDOCUB6oc5UUMuD
| 24,591 |
DeepSpeed/FSDP ckpt saving utils fixes and FSDP training args fixes
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
1. The `_save` function saves the `tokenizer` and `training_args.bin` in addition to the model.
2. This PR rearranges the model-saving logic for DeepSpeed and FSDP so that the above two items are also saved alongside the model checkpoint.
3. Fixes https://github.com/huggingface/transformers/issues/24641
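For orientation, here is a minimal sketch of the extra artifacts mentioned in point 1; the helper name is made up for illustration, while `training_args.bin` and `save_pretrained` follow the usual `Trainer` conventions:
```python
import os
import torch

def save_extras(output_dir, tokenizer, training_args):
    # Alongside the model checkpoint, persist the tokenizer files and the
    # training arguments so the run can be reloaded/reproduced from output_dir.
    os.makedirs(output_dir, exist_ok=True)
    if tokenizer is not None:
        tokenizer.save_pretrained(output_dir)
    torch.save(training_args, os.path.join(output_dir, "training_args.bin"))
```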
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24591/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24591",
"html_url": "https://github.com/huggingface/transformers/pull/24591",
"diff_url": "https://github.com/huggingface/transformers/pull/24591.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24591.patch",
"merged_at": 1688636005000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24590
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24590/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24590/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24590/events
|
https://github.com/huggingface/transformers/pull/24590
| 1,782,197,606 |
PR_kwDOCUB6oc5UUGP_
| 24,590 |
Update link to RunHouse hardware setup documentation.
|
{
"login": "BioGeek",
"id": 59344,
"node_id": "MDQ6VXNlcjU5MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/59344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BioGeek",
"html_url": "https://github.com/BioGeek",
"followers_url": "https://api.github.com/users/BioGeek/followers",
"following_url": "https://api.github.com/users/BioGeek/following{/other_user}",
"gists_url": "https://api.github.com/users/BioGeek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BioGeek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BioGeek/subscriptions",
"organizations_url": "https://api.github.com/users/BioGeek/orgs",
"repos_url": "https://api.github.com/users/BioGeek/repos",
"events_url": "https://api.github.com/users/BioGeek/events{/privacy}",
"received_events_url": "https://api.github.com/users/BioGeek/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks!! I'll update the one in Accelerate too."
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
The [hardware setup](https://runhouse-docs.readthedocs-hosted.com/en/main/rh_primitives/cluster.html#hardware-setup) link gives a 404 error. Replaced with a [link](https://runhouse-docs.readthedocs-hosted.com/en/latest/api/python/cluster.html#hardware-setup) that points to the latest version of the RunHouse documentation.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Documentation: @dongreenberg, @sgugger, @stevhliu and @MKhalusova
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24590/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24590",
"html_url": "https://github.com/huggingface/transformers/pull/24590",
"diff_url": "https://github.com/huggingface/transformers/pull/24590.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24590.patch",
"merged_at": 1688123518000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24589
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24589/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24589/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24589/events
|
https://github.com/huggingface/transformers/issues/24589
| 1,782,185,014 |
I_kwDOCUB6oc5qOfw2
| 24,589 |
stuck in the evaluation_loop of trainer.py when training
|
{
"login": "clinton81",
"id": 24949604,
"node_id": "MDQ6VXNlcjI0OTQ5NjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/24949604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clinton81",
"html_url": "https://github.com/clinton81",
"followers_url": "https://api.github.com/users/clinton81/followers",
"following_url": "https://api.github.com/users/clinton81/following{/other_user}",
"gists_url": "https://api.github.com/users/clinton81/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clinton81/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clinton81/subscriptions",
"organizations_url": "https://api.github.com/users/clinton81/orgs",
"repos_url": "https://api.github.com/users/clinton81/repos",
"events_url": "https://api.github.com/users/clinton81/events{/privacy}",
"received_events_url": "https://api.github.com/users/clinton81/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"We can't really help without knowing the code you are running.",
"@sgugger\r\n\r\nI have located the problem statement:\r\n```\r\ntrainer.py\r\ndef _pad_across_processes(self, tensor, pad_index=-100):\r\n ....\r\n # Gather all sizes\r\n size = torch.tensor(tensor.shape, device=tensor.device)[None]\r\n sizes = self._nested_gather(size).cpu()\r\n ....\r\n```\r\nStuck at .cpu()\r\nI disassembled self._nested_gather(size).cpu() and found that it was stuck in .cpu()\r\nIn the above statement, tensor.shape==[1, 4608, 106078], tensor.device == 'cuda:0' .\r\nWhy is it stuck there?",
"You might have tensors of different sizes on your GPUs. This usually causes a hang when gathering.\r\nAgain, it's hard to know for sure when we don't know the code you are running :-)",
"Hi, I met similar problems with the latest Transformers, where training gets stuck in the evaluation_loop of trainer.py. @clinton81 Have you found any solution?",
"Downgrading Transformers to v4.19.4 works (same code, same data, same command).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,691 | 1,691 |
NONE
| null |
### System Info
- Ubuntu: 20.04.5
- GPU: A800(80G) x 8
- CUDA: 11.7
- NCCL: 2.14.3
- python: 3.9.16
- deepspeed: 0.9.5
- transformers: 4.30.2
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
### Problem Description:
I'm doing ***LLM pre-training***. The dataset is ready; the dataset class is my own implementation.
The base model is from the configuration file: LLaMA 7B (Hugging Face).
And other parameters are:
* batch_size: 2
* gradient_accumulation_steps: 2
* eval_batch_size: 1
* eval_steps: 12
* save_steps: 12
Additionally, ds_config is:
```
"zero_optimization": {
"stage": 1,
"offload_param": {
"device": "cpu"
},
"offload_optimizer": {
"device": "cpu"
},
"allgather_partitions": true,
"allgather_bucket_size": 5e8,
"contiguous_gradients": true,
"reduce_bucket_size": 5e8,
"overlap_comm": true
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 1e-05,
"betas": "auto",
"eps": 1e-08
}
},
```
### Question:
***When I start training, everything is normal from the beginning up to step 12, where, according to my settings, the first evaluation should be triggered. Training is normal and the evaluation is indeed triggered.
But the evaluation never finishes.***
I added logging in my dataset class and saw that the first 12 training steps ended normally. The first batch of validation data was returned from my \_\_getitem\_\_ to the Hugging Face trainer, and then it got stuck: no return, no further information, and no re-entry into the validation dataset (I implement it, and \_\_getitem\_\_ is never called again).
### Running stack
At this point, my CPU and GPUs are fully loaded.
| NVIDIA-SMI 515.86.01 | Driver Version: 515.86.01 | CUDA Version: 11.7 |
|--|--|--|
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. MIG M.|
|---|---|---|
| 0 NVIDIA A800 80G... On N/A 47C P0 76W / 300W | 58951MiB / 81920MiB| 100% Default Disabled |
| 1 NVIDIA A800 80G... On N/A 48C P0 74W / 300W | 59001MiB / 81920MiB| 100% Default Disabled |
| 2 NVIDIA A800 80G... On N/A 47C P0 72W / 300W | 58999MiB / 81920MiB | 100% Default Disabled |
| 3 NVIDIA A800 80G... On N/A 43C P0 69W / 300W | 58953MiB / 81920MiB| 100% Default Disabled |
| 4 NVIDIA A800 80G... On N/A 43C P0 71W / 300W | 58953MiB / 81920MiB | 100% Default Disabled |
| 5 NVIDIA A800 80G... On N/A 44C P0 70W / 300W | 58999MiB / 81920MiB | 100% Default Disabled |
|-------------------------------|----------------------|----------------------|
```
MiB Mem : 257598.2 total, 144592.4 free, 65631.8 used, 47374.0 buff/cache
MiB Swap: 532480.0 total, 531717.9 free, 762.0 used. 144353.6 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2335083 root 20 0 111.9g 12.9g 5.5g R 200.0 5.1 179:15.41 python3.9
2335101 root 20 0 111.7g 13.3g 5.9g R 199.7 5.3 178:57.67 python3.9
2335097 root 20 0 111.7g 13.3g 5.9g R 100.3 5.3 94:33.70 python3.9
2335099 root 20 0 111.7g 13.1g 5.7g R 100.3 5.2 95:05.42 python3.9
2335091 root 20 0 111.7g 13.1g 5.8g R 100.0 5.2 94:48.45 python3.9
2335095 root 20 0 111.7g 13.1g 5.7g R 100.0 5.2 95:00.15 python3.9
2335098 root 20 0 111.6g 13.1g 5.7g R 100.0 5.2 94:45.88 python3.9
2335096 root 20 0 111.7g 13.2g 5.8g R 99.7 5.2 94:39.61 python3.9
```
I figured out a way to print the stacks of all active threads:
```
Printing stack
-- Thread ID: 140306341001024 ---
File "/data/work/ChatGPT/trlx_rlhf/sft_fullpretrain/./train_gptj_summarize.py", line 281, in <module>
trainer.train()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2020, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2321, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3053, in evaluate
output = eval_loop(
Printing stack
-- Thread ID: 140281920169792 ---
File "/data/work/ChatGPT/trlx_rlhf/sft_fullpretrain/./train_gptj_summarize.py", line 281, in <module>
trainer.train()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2020, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
-- Thread ID: 140275561256704 ---
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2321, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3053, in evaluate
output = eval_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3266, in evaluation_loop
logits = self._pad_across_processes(logits)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3410, in _pad_across_processes
sizes = self._nested_gather(size).cpu()
Printing stack
-- Thread ID: 139870071547712 ---
File "/data/work/ChatGPT/trlx_rlhf/sft_fullpretrain/./train_gptj_summarize.py", line 281, in <module>
trainer.train()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2020, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2321, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3053, in evaluate
output = eval_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3266, in evaluation_loop
logits = self._pad_across_processes(logits)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3410, in _pad_across_processes
sizes = self._nested_gather(size).cpu()
-- Thread ID: 140452146153280 ---
File "/data/work/ChatGPT/trlx_rlhf/sft_fullpretrain/./train_gptj_summarize.py", line 281, in <module>
trainer.train()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2020, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2321, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3053, in evaluate
output = eval_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3266, in evaluation_loop
logits = self._pad_across_processes(logits)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3410, in _pad_across_processes
sizes = self._nested_gather(size).cpu()
Printing stack
-- Thread ID: 139800833808192 ---
File "/data/work/ChatGPT/trlx_rlhf/sft_fullpretrain/./train_gptj_summarize.py", line 281, in <module>
trainer.train()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2020, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2321, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3053, in evaluate
output = eval_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3266, in evaluation_loop
logits = self._pad_across_processes(logits)
-- Thread ID: 139794038388480 ---
File "/data/work/ChatGPT/trlx_rlhf/sft_fullpretrain/DebugStack.py", line 17, in _ThreadPrintStack
traceback.print_stack(frame)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/threading.py", line 937, in _bootstrap
self._bootstrap_inner()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/threading.py", line 980, in _bootstrap_inner
self.run()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/tensorboard/summary/writer/event_file_writer.py", line 244, in run
self._run()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/tensorboard/summary/writer/event_file_writer.py", line 269, in _ru
data = self._queue.get(True, queue_wait_duration)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/queue.py", line 180, in get
self.not_empty.wait(remaining)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/threading.py", line 316, in wait
gotit = waiter.acquire(True, timeout)
-- Thread ID: 139793899173632 ---
File "/root/mambaforge/envs/trlx_env/lib/python3.9/threading.py", line 937, in _bootstrap
self._bootstrap_inner()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/threading.py", line 980, in _bootstrap_inner
self.run()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/tqdm/_monitor.py", line 60, in run
self.was_killed.wait(self.sleep_interval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/threading.py", line 581, in wait
signaled = self._cond.wait(timeout)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/threading.py", line 316, in wait
gotit = waiter.acquire(True, timeout)
Printing stack
-- Thread ID: 140320438421312 ---
File "/data/work/ChatGPT/trlx_rlhf/sft_fullpretrain/./train_gptj_summarize.py", line 281, in <module>
trainer.train()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2020, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2321, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3053, in evaluate
output = eval_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3266, in evaluation_loop
logits = self._pad_across_processes(logits)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3410, in _pad_across_processes
sizes = self._nested_gather(size).cpu()
Printing stack
-- Thread ID: 140180603557696 ---
File "/data/work/ChatGPT/trlx_rlhf/sft_fullpretrain/./train_gptj_summarize.py", line 281, in <module>
trainer.train()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2020, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
-- Thread ID: 140174133536512 ---
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2321, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3053, in evaluate
output = eval_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3266, in evaluation_loop
logits = self._pad_across_processes(logits)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3410, in _pad_across_processes
sizes = self._nested_gather(size).cpu()
Printing stack
-- Thread ID: 139808714364736 ---
File "/data/work/ChatGPT/trlx_rlhf/sft_fullpretrain/./train_gptj_summarize.py", line 281, in <module>
trainer.train()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2020, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2321, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3053, in evaluate
output = eval_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3266, in evaluation_loop
logits = self._pad_across_processes(logits)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3410, in _pad_across_processes
sizes = self._nested_gather(size).cpu()
```
### What I have tried
Also, I tried:
1. Change model (change to gpt-j 6b)
2. Change deepspeed stage (1, 2, 3)
3. Change batch_size (1, 2, 4)
None of them work; they all get stuck in the evaluation described above. I need help.
### Expected behavior
I want the evaluation to work properly and not get stuck.
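Not a fix for the hang itself, but a hedged sketch of how the gathered tensors can be kept small during evaluation; `preprocess_logits_for_metrics` and `eval_accumulation_steps` are standard `Trainer` options, while whether they avoid this particular deadlock is an assumption:
```python
from transformers import Trainer, TrainingArguments

def preprocess_logits_for_metrics(logits, labels):
    # Shrink [batch, seq_len, vocab_size] logits to token ids before they are
    # padded and gathered across processes.
    return logits.argmax(dim=-1)

args = TrainingArguments(
    output_dir="out",
    per_device_eval_batch_size=1,
    eval_accumulation_steps=8,  # periodically move accumulated tensors to CPU
)

trainer = Trainer(
    model=model,                # assumed to be defined elsewhere
    args=args,
    eval_dataset=eval_dataset,  # assumed to be defined elsewhere
    preprocess_logits_for_metrics=preprocess_logits_for_metrics,
)
```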
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24589/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24588
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24588/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24588/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24588/events
|
https://github.com/huggingface/transformers/pull/24588
| 1,782,108,424 |
PR_kwDOCUB6oc5UTy-J
| 24,588 |
🌐 [i18n-KO] Translated `tasks/document_question_answering.md` to Korean
|
{
"login": "jungnerd",
"id": 46880056,
"node_id": "MDQ6VXNlcjQ2ODgwMDU2",
"avatar_url": "https://avatars.githubusercontent.com/u/46880056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jungnerd",
"html_url": "https://github.com/jungnerd",
"followers_url": "https://api.github.com/users/jungnerd/followers",
"following_url": "https://api.github.com/users/jungnerd/following{/other_user}",
"gists_url": "https://api.github.com/users/jungnerd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jungnerd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jungnerd/subscriptions",
"organizations_url": "https://api.github.com/users/jungnerd/orgs",
"repos_url": "https://api.github.com/users/jungnerd/repos",
"events_url": "https://api.github.com/users/jungnerd/events{/privacy}",
"received_events_url": "https://api.github.com/users/jungnerd/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"May you please review this PR? ๐ค\r\n@sgugger, @ArthurZucker, @eunseojo"
] | 1,688 | 1,690 | 1,689 |
CONTRIBUTOR
| null |
<!-- Please title the PR "🌐 [i18n-KO] Translated <your_file>.mdx to Korean" -->
# What does this PR do?
Translated the `tasks/document_question_answering.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- A record remains in the main issue! If you practiced on the PseudoLab repo first, we would appreciate it if you removed this before submitting. :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations
- [x] Grammar Check
- [x] Review or Add new terms to glossary
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas
## Who can review? (Initial)
<!-- 1. Please expose the review-request comment below to the PseudoLab team members only after all of the checkboxes above are complete! -->
Team PseudoLab, may you please review this PR?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Please expose the review-request comment below to the Hugging Face staff only after the review with the PseudoLab team members is finished! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24588/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24588/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24588",
"html_url": "https://github.com/huggingface/transformers/pull/24588",
"diff_url": "https://github.com/huggingface/transformers/pull/24588.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24588.patch",
"merged_at": 1689848377000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24587
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24587/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24587/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24587/events
|
https://github.com/huggingface/transformers/pull/24587
| 1,782,059,524 |
PR_kwDOCUB6oc5UToal
| 24,587 |
Add Llama Flax Implementation
|
{
"login": "vvvm23",
"id": 44398246,
"node_id": "MDQ6VXNlcjQ0Mzk4MjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/44398246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vvvm23",
"html_url": "https://github.com/vvvm23",
"followers_url": "https://api.github.com/users/vvvm23/followers",
"following_url": "https://api.github.com/users/vvvm23/following{/other_user}",
"gists_url": "https://api.github.com/users/vvvm23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vvvm23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vvvm23/subscriptions",
"organizations_url": "https://api.github.com/users/vvvm23/orgs",
"repos_url": "https://api.github.com/users/vvvm23/repos",
"events_url": "https://api.github.com/users/vvvm23/events{/privacy}",
"received_events_url": "https://api.github.com/users/vvvm23/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Very cool @vvvm23! Scanned through the PR and it looks very nice already - happy to do a full review when it's close to completion. Just drop me a line and I'll have a look! ๐ Likewise if you have any questions or queries, I'm on hand to help :)",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24587). All of your documentation changes will be reflected on that endpoint.",
"Hi @vvvm23 and @sanchit-gandhi, do you guys have a timeline for this effort? Asking because I would love to import FlaxLlama from Hugging Face, but if it is going to take a while, I will probably build my own pipeline to import the model.\r\n\r\nNot sure if this helps at all, but [here](https://github.com/young-geng/EasyLM/blob/main/EasyLM/models/llama/llama_model.py) you find an implementation of Llama in Flax (plus some other library-specific methods that you probably won't need).",
"Hi @gianlucadetommaso, I haven't had the time to work on this since this draft PR went live, but I am blocking time out this weekend to continue.",
"Cool to see community interest around running Flax Llama! Feel free to ping me here when you need a review @vvvm23!",
"Thanks @sanchit-gandhi I found a little time to continue today.\r\n\r\nOne issue I am noticing is that the tolerance when comparing the ground truth PyTorch implementation (in `modeling_llama.py`) and my own implementation, is a lot higher than I'd like. For three hidden layers in the decoder stack, I have to raise it to `atol=1e-2, rtol=1e-2`, with one hidden layer being at `atol=1e-3, rtol=1e-3` in order to pass. You can see the scratch test I am using at the bottom of `modeling_flax_llama.py`\r\n\r\nI think some numerical differences are expected, but not sure to what degree. I am also testing with `float32` so that made me even more suspicious. Would you expected the results to be identical? This is my first time porting a PyTorch model to Flax. Thanks~",
"Update: I now have a full model working. I haven't checked if the pretrained weight loading wrappers (provided by the Flax GPTNeo implementation) work yet, but once they are it will be ready for review. I'll simultaneously clean it up and add some missing features whilst it is being reviewed.",
"Hey! Thanks for the progress update here @vvvm23 and great questions regarding numerical equivalence between models.\r\n\r\nGenerally, for any model less than 1B params we should be able to get equivalence to within 1e-5 between Flax and PyTorch. It's quite likely that you won't get this equivalence running the matmuls in bfloat16 on TPU. But you should be able to running the matmuls in float32, see https://github.com/huggingface/transformers/issues/15754 and https://github.com/google/jax/issues/10413#issue-1212211265 for details\r\n\r\nHere's a script that I used previously for checking PT / Flax equivalence for BLOOM: https://github.com/sanchit-gandhi/codesnippets/blob/main/check_flax_bloom_jit_small_testing.ipynb You can ignore the bits about JIT'ing the forward pass for the time being. You can also uncomment the check to run it on CPU to force the highest precision, or use the decorator as provided\r\n\r\nIf we don't get 1e-5 precision, it's usually an indicator that we have a divergence in our model. Here, going through layer-by-layer and checking the hidden-states might be required to pinpoint it",
"Okay, thanks for the guidance and helper scripts ๐ฅ I expected that this lack of precision was not normal ๐
\r\n\r\nI'll get the pretrained wrappers working first and then focus on debugging the numerical divergence.\r\n\r\nI'm aiming for end of this week to fix those numerical issues, but my responsibilities elsewhere are pulling me a lot, so fingers crossed ๐ค ",
"I've begun my hunt for numerical bugs ๐\r\n\r\nThe first I squashed was rather strange. It seems `torch.rsqrt` and `jax.lax.rsqrt` do not match. This is used in the RMSNorm layers. Simple test to reproduce:\r\n```\r\nIn [19]: a = np.asarray(a, dtype=np.float32)\r\n\r\nIn [20]: a\r\nOut[20]:\r\narray([1.16661310, 1.46686172, 0.13794081, 1.22346771, 1.17509305],\r\n dtype=float32)\r\nIn [21]: torch.rsqrt(torch.from_numpy(a))\r\nOut[21]: tensor([0.92584139, 0.82566792, 2.69248700, 0.90407354, 0.92249471])\r\n\r\nIn [22]: jax.lax.rsqrt(a)\r\nOut[22]: Array([0.92584133, 0.82566792, 2.69248700, 0.90407354, 0.92249471], dtype=float32)\r\n\r\nIn [23]: 1 / torch.sqrt(torch.from_numpy(a))\r\nOut[23]: tensor([0.92584139, 0.82566792, 2.69248700, 0.90407354, 0.92249471])\r\n\r\nIn [24]: 1 / jax.numpy.sqrt(a)\r\nOut[24]: Array([0.92584139, 0.82566792, 2.69248700, 0.90407354, 0.92249471], dtype=float32)\r\n```\r\nSo the fix there was just to replace the `jax.lax.rsqrt` calls with `1 / jax.numpy.sqrt(...)`\r\n\r\nModels still mismatches so I'll keep digging.",
"@sanchit-gandhi The model now numerically matches in fp32 on CPU. The issue was my backend has changed from CPU to GPU since fixing the `rsqrt` issue. I don't think we can expect a perfect match on GPU as the two models use fundamentally different backends. If there is anything you know of that could help remedy this, let me know.\r\n\r\nWhat are the next steps to take? I am guessing some model tests, as well as trying it out on a real model checkpoint rather than random weights. However, my dev machine goes OOM when attempting to load the checkpoint on CPU.",
"Hey @vvvm23! Excellent work on pinpointing the difference between torch and jax.lax `rsqrt` and glad to hear we're within numerical precision using fp32 on CPU - we can be pretty confident we have an accurate Flax implantation based on these results. For GPU, there will be differences between PyTorch and JAX. This is expected since JAX fundamentally works differently to PyTorch with how it computes the matmuls, and is OK since the JAX model will typically generate predictions that are 'as good' as the PyTorch one.\r\n\r\nAdding some tests and updating the docs would be the most sensible next steps! Again, you can refer to the Flax GPT Neo model to see the relevant tests to add: https://github.com/huggingface/transformers/blob/main/tests/models/gpt_neo/test_modeling_flax_gpt_neo.py\r\n\r\n> However, my dev machine goes OOM when attempting to load the checkpoint on CPU.\r\n\r\nThat's interesting - are we loading the weights on GPU by accident? There shouldn't be any GPU OOM if running on CPU. We might see our RAM get full if loading extremely large weights, but the GPU memory shouldn't be affected. What model size are you loading? We can try the smallest 7b checkpoint: https://huggingface.co/meta-llama/Llama-2-7b",
"Awesome thanks, tests and docs it is! I am currently on leave so won't be progressing on this until the 31st.\r\n\r\n> That's interesting - are we loading the weights on GPU by accident?\r\n \r\nActually, in the end no. By OOM on my dev machine, I meant out of CPU memory. Switching to a GPU backend meant I could load the model without running out of memory. So, nothing to worry about ๐
",
"Awesome - thanks for the update @vvvm23. Looking forward to doing a full review of the PR on your return! ",
"How's it looking @vvvm23? Let me know if I can help in anyway! Otherwise feel free to ping me here as soon as we're ready for a review, very excited to add this Flax model for the community!",
"Hi, currently been pretty split responsibility wise (moving house and job !!) so have only made a small bit of progress.\r\n\r\nMost of the tests pass, however, there seems to be some matmul shape mismatch in the `generate_*` tests. Guessing I didn't implement the KV cache correctly, so I'll need to look at that. I also added back some missing docstrings.\r\n\r\nI'll have some time to work on this Thursday, Friday (10th and 11th) but then probably nothing for another week :exploding_head: If you are in a rush and fancy trying to get the remaining tests to pass, please try! Sorry for the slowness on my part also! ",
"The final tests ended up being easy to fix: I had simply forgotten to swap the attention mask and position ids in the pretrained model wrapper.\r\n\r\n@sanchit-gandhi I haven't retested the slow tests locally (as my laptop is slow) but later today I can run them, then tidy the code a bit. If all goes well, should be good for review later today or early tomorrow ๐ ",
"@sanchit-gandhi all tests pass locally :tada: And I've also ran the model using the `generate` API to see if the outputs make sense:\r\n```\r\nIn [23]: inputs = tokenizer('Aloha, World!', return_tensors='np')\r\n\r\nIn [24]: tokenizer.decode(model.generate(**inputs, generation_config=model.generation_config, max_length=100).sequences[0])\r\nOut[24]: '<s> Aloha, World!\\nIโm back from my trip to Hawaii and Iโm feeling great! Iโm still trying to get back into the swing of things, but Iโm getting there. Iโm going to be posting a lot of pictures from my trip, so stay tuned!\\nIโm also going to be posting a lot of pictures from my trip to Hawaii, so stay tuned!\\nIโm also going to be posting a lot of pictures'\r\n```\r\nSeems good to me!\r\n\r\n---\r\n\r\nI think this is ready for review. I would like to draw your attention to a few points I was unsure about:\r\n\r\nFirstly, the model currently throws a warning when loading pretrained checkpoints:\r\n```\r\nSome weights of the model checkpoint at openlm-research/open_llama_3b_v2 were not used when initializing FlaxLlamaForCausalLM: \r\n{('model', 'layers', '9', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '1', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '24', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '11', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '7', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '23', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '13', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '5', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '6', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '20', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '21', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '16', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '10', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '4', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '0', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '25', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '12', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '3', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '19', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '14', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '18', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '22', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '8', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '15', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '17', 'self_attn', 'rotary_emb', 'inv_freq'), ('model', 'layers', '2', 'self_attn', 'rotary_emb', 'inv_freq')}\r\n```\r\nThis has no effect on the outputs, just simply the Flax version of the model does not store the `inv_freq` tensor for rotary embeddings within the state dictionary, so these just get discarded. Is there a way to suppress this warning so not to scare any users?\r\n\r\nSecondly, please double check the licensing. I just copied this from the PyTorch version of Llama and updated the year.\r\n\r\nThird, I use the checkpoint `openlm-research/open_llama_3b_v2` as it was the smallest, fully open Llama checkpoint I could find. The 'official' Llama checkpoints have gated access, so I am unsure if they are appropriate for testing / documentation purposes. 
This also means I haven't been able to test the model with the official Llama checkpoint as I still haven't managed to get permission from Meta :cry: \r\n\r\nFourth, as we discussed a lot of the code is copied from the Flax implementation of GPT-Neo. There may be some leftover parts from there that we don't need in Llama, and I may have missed some best practices for Llama as GPT-Neo is (relatively) old now. In particular, see the following code block in `FlaxLlamaPreTrainedModel.__call__`:\r\n\r\n```\r\n # TODO: can this handle input tensors being passed as kwargs? I copied GPT-Neo directly here\r\n outputs = self.module.apply(\r\n inputs,\r\n jnp.array(input_ids, dtype=\"i4\"),\r\n jnp.array(attention_mask, dtype=\"i4\"),\r\n jnp.array(position_ids, dtype=\"i4\"),\r\n not train,\r\n False,\r\n output_attentions,\r\n output_hidden_states,\r\n return_dict,\r\n rngs=rngs,\r\n mutable=mutable,\r\n )\r\n```\r\n\r\nFinally, the tests pass but please check if they have sufficient coverage :hugs: \r\n\r\n---\r\n\r\nGenerally speaking, I have a lot of experience writing model code but no experience making large contributions to the Huggingface ecosystem, so there is almost certainly a lot wrong! Apologies in advance and I will do my best to help you bring this model to the finish line :muscle: Thanks for your work so far!",
"Thanks for your additional comments, I have some time to work on the more involved points today ๐ค ",
"@sanchit-gandhi I think everything except the missing weight issue is resolved now (see my comment).\r\n\r\nTrying to resolve some remaining CI issues, I noticed that the line `# Copied from transformers.models.gpt_neo.modeling_flax_gpt_neo.FlaxGPTNeoPreTrainedModel with GPTNeo->Llama` will change the line `@add_start_docstrings_to_model_forward(GPT_NEO_INPUTS_DOCSTRING)` , and overwrite `LLAMA_INPUTS_DOCSTRNG`. Any idea how to stop this happening? Otherwise the CI won't pass :thinking: ",
"That's correct behaviour @vvvm23! What we need to do is create the variable `LLAMA_INPUTS_DOCSTRNG` in the modelling file that contains the necessary docstring info for LLAMA (more or less copied one-for-one from Flax GPT Neo, but adapted for any different inputs)",
"Yeah, it is correct behaviour - what I meant though is that I did have a `LLAMA_INPUTS_DOCSTRING` in a previous commit, but running `make fix-copies` overwrote this docstring with the GPT-Neo version (as you suggested we add that at a class level). I guess my question was, how can we copy everything else in the class but somehow exclude the docstring line?\r\n\r\nI get that we need the docstring itself, just currently the CI won't pass with both that docstring and the `# Copied from transformers.models.gpt_neo.modeling_flax_gpt_neo.FlaxGPTNeoPreTrainedModel with GPTNeo->Llama` line. Does the issue make sense?",
"@sanchit-gandhi fixed the CI issue (I think) by just adding more `Copied from ...` comments and deleting the class level comment. I also fixed the merge conflict. We should be good to go once CI passes I think ๐ ",
"@sanchit-gandhi the CI still fails, this is for two reasons. Could you assist me with resolving this?\r\n\r\n1. The documentation test fails as it tries to load the checkpoint `_CHECKPOINT_FOR_DOC` however it needs `from_pt=True` to be set.\r\n2. Flax Llama is based off the GPT Neo implementation. GPT Neo uses special tests to test equivalence between the flax and pytorch implementations. This overrides the common `test_equivalence_pt_to_flax` test. I copy these special tests (to make my flax tests pass). However, changing the tests for the flax version will cause the pytorch version to fail as it is using the flax version incorrectly.\r\n\r\n> edit: for the second, please see `test_modeling_llama.py:308`. These tests need to be overriden somehow, for now I just return directly to get the CI to pass.\r\n\r\nAll the tests pass and the model is pretty much ready. Just not sure how to get through these last two blockers. Help would be much appreciated!",
"Awesome - thanks for the progress updates here @vvvm23. Agree with you that this is nearly in a position for a final review!\r\n\r\nTo fix your problems:\r\n1. Could you open a pull request on the HF Hub to add the Flax model weights to the checkpoint? See this page for opening a PR: https://huggingface.co/openlm-research/open_llama_3b_v2/discussions (just click 'new pull request' and open a PR). You can ping me here once you've done this and I'll merge the Hub PR! Once this is done, we won't need the `from_pt` argument\r\n2. Does the Flax-PyTorch test `test_equivalence_flax_to_pt` fail if we don't override it? I think it should be possible to use the model without this hack? Shall we just construct sensible attention masks in the model tester? I think it's the random attention mask that might be causing the problem: https://github.com/huggingface/transformers/blob/1fa2d89a9bb98a15e9720190e07d272a42f03d28/tests/models/llama/test_modeling_llama.py#L99\r\n\r\nIn practice, we would never have a 'random' attention mask like this. It would contain ones up to a certain point, then zeros after that. Could you try using a more realistic attention mask like this and seeing if that fixes the test?",
"> Could you open a pull request on the HF Hub to add the Flax model weights to the checkpoint?\r\n\r\nPR is open ๐ Git LFS is a great thing.\r\n\r\n> You can ping me here once you've done this and I'll merge the Hub PR!\r\n\r\n@sanchit-gandhi \r\n\r\n> Shall we just construct sensible attention masks in the model tester? I think it's the random attention mask that might be causing the problem\r\n\r\nYou are a genius! ๐
That was exactly the issue. I changed it to a fixed lower triangular tensor of ones (to test different lengths), padded with ones (to match size of the random one). This should give good enough coverage ๐ \r\n\r\nThe only test that fails now is the PR documentation one, which should be fixed once the checkpoint PR is complete.",
"That's awesome - happy to hear that fixed the issue! Let's wait until one of the OpenLM authors sees the open PR to add the flax weights: https://huggingface.co/openlm-research/open_llama_3b_v2/discussions/4#64f5b59fbce84cd8b1c80f38",
"Hey @sanchit-gandhi, thanks for the second review~\r\n\r\nI can try with Llama 2 weights, however last time I requested access I never got a response. I will try again when I get the chance.\r\n\r\nJust a heads up, I have basically no time to work on this until the start of next week. I am imminently moving which is basically sucking up all my time currently ๐ I am also without GPU for a bit. I ask for your continued patience ๐ or if you have the time to perhaps tackle a couple of the smaller changes yourself? Thanks for your understanding~",
"Hey @vvvm23, thanks for the update! Cool - let's try again with LLaMA v2 weights. Otherwise the Open Assistant ones are good โ
(as you have currently)\r\n\r\nBest of luck with the move, I hope it goes well! No worries about the delay - I'll be on hand when you return and can help with any issues that we're still facing! Unfortunately, I'm also a bit tied up right now, so can't complete this myself. But from my last review, it looks like you're in a strong position to finish the PR with some small final changes!",
"Hey @sanchit-gandhi, thanks for bearing with me. I have addressed your comments. They were quite small but I didn't have the headspace to think about this earlier ๐
\r\n\r\nRegarding Llama 2 weights, I got the Meta approval, so I can test with the Llama 2 weights. Unfortunately, I am still without a GPU workstation and will likely OOM on CPU. So if you want results there you will have to bear with me more โ but we should be good to go as far as a \"Llama 1\" implementation goes.\r\n\r\nedit: also rebased the branch with `main` as we were pretty out of date ๐
"
] | 1,688 | 1,701 | 1,701 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
Fixes #26809. This is a work-in-progress port of Llama to Flax, leaving it as a draft PR for now.
The implementation is based heavily off the GPT-Neo and GPT-J Flax implementations.
Currently, the submodules are ready, I just need to assemble into a full model, check weight loading, add tests, and update the documentation.
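As a target for the finished port, here is a minimal usage sketch (the `FlaxLlamaForCausalLM` class name and the `openlm-research/open_llama_3b_v2` checkpoint are assumptions until the PR is finalized):
```python
# Sketch only: the class name and checkpoint below are assumptions about what
# this PR will expose once merged.
from transformers import AutoTokenizer, FlaxLlamaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_3b_v2")
model = FlaxLlamaForCausalLM.from_pretrained("openlm-research/open_llama_3b_v2", from_pt=True)

inputs = tokenizer("Aloha, World!", return_tensors="np")
outputs = model.generate(**inputs, max_length=50)
print(tokenizer.decode(outputs.sequences[0]))
```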
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [mentioned in this issue comment](https://github.com/huggingface/transformers/issues/22647#issuecomment-1579154174)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sanchit-gandhi
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24587/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24587/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24587",
"html_url": "https://github.com/huggingface/transformers/pull/24587",
"diff_url": "https://github.com/huggingface/transformers/pull/24587.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24587.patch",
"merged_at": 1701929100000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24586
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24586/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24586/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24586/events
|
https://github.com/huggingface/transformers/issues/24586
| 1,782,003,460 |
I_kwDOCUB6oc5qNzcE
| 24,586 |
tokenizer = AutoTokenizer.from_pretrained('distilroberta-base') report error
|
{
"login": "monet-joe",
"id": 20459298,
"node_id": "MDQ6VXNlcjIwNDU5Mjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/20459298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monet-joe",
"html_url": "https://github.com/monet-joe",
"followers_url": "https://api.github.com/users/monet-joe/followers",
"following_url": "https://api.github.com/users/monet-joe/following{/other_user}",
"gists_url": "https://api.github.com/users/monet-joe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monet-joe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monet-joe/subscriptions",
"organizations_url": "https://api.github.com/users/monet-joe/orgs",
"repos_url": "https://api.github.com/users/monet-joe/repos",
"events_url": "https://api.github.com/users/monet-joe/events{/privacy}",
"received_events_url": "https://api.github.com/users/monet-joe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Please follow the template when reporting issue: share your system info with us. You can run the command `transformers-cli env` and copy-paste its output below.",
"However, I have tried `tokenizer = AutoTokenizer.from_pretrained('distilroberta-base')` and it works perfectly (for me).\r\n\r\nCould you also check with other model checkpoints on the Hub?",
"I have tried it and it also works for me just fine",
"> Please follow the template when reporting issue: share your system info with us. You can run the command `transformers-cli env` and copy-paste its output below.\r\n\r\nTraceback (most recent call last):\r\n File \"***\\.conda\\envs\\tfui\\lib\\runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"***\\.conda\\envs\\tfui\\lib\\runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"***\\.conda\\envs\\tfui\\Scripts\\transformers-cli.exe\\__main__.py\", line 4, in <module>\r\n File \"***\\.conda\\envs\\tfui\\lib\\site-packages\\transformers\\commands\\transformers_cli.py\", line 26, in <module> \r\n from .user import UserCommands\r\n File \"***\\.conda\\envs\\tfui\\lib\\site-packages\\transformers\\commands\\user.py\", line 20, in <module>\r\n from huggingface_hub.hf_api import HfFolder, create_repo, login, logout, whoami\r\nImportError: cannot import name 'login' from 'huggingface_hub.hf_api' (***\\.conda\\envs\\tfui\\lib\\site-packages\\huggingface_hub\\hf_api.py)",
"ValueError\r\nConnection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.\r\n\r\nI guess it is mostly because of the GFW",
"Maybe first try to upgrade `huggingface_hub` version",
"> Maybe first try to upgrade `huggingface_hub` version\r\n\r\nOSError\r\nWe couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like distilroberta-base is not the path to a directory containing a file named config.json.\r\nCheckout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.\r\nhuggingface_hub.utils._errors.LocalEntryNotFoundError: Connection error, and we cannot find the requested files in the disk cache. Please try again or make sure your Internet connection is on.\r\n\r\nI have upgraded it to 0.15.1, but it still has error",
"Could you check if this is only for `distilroberta-base` or same error for other checkpoint like `gpt2` or `bert-base-uncased`.\r\n\r\nCould you re-run `transformers-cli env`?",
"> Could you check if this is only for `distilroberta-base` or same error for other checkpoint like `gpt2` or `bert-base-uncased`.\r\n> \r\n> Could you re-run `transformers-cli env`?\r\n\r\nOSError\r\nWe couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like gpt2 is not the path to a directory containing a file named config.json.\r\nCheckout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.\r\nhuggingface_hub.utils._errors.LocalEntryNotFoundError: Connection error, and we cannot find the requested files in the disk cache. Please try again or make sure your Internet connection is on.\r\n\r\n//=================================================\r\n\r\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\r\n\r\n- `transformers` version: 4.30.1\r\n- Platform: Windows-10-10.0.19045-SP0\r\n- Python version: 3.8.16\r\n- Huggingface_hub version: 0.15.1\r\n- Safetensors version: 0.3.1 but is ignored because of PyTorch version too old.\r\n- PyTorch version (GPU?): 1.9.1+cu111 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>",
"Can't really know what happens in your environment. Probably try with a clean (new) virtual python environment, and install `transformers` as `pip install transformers[dev]`. If still not working, there is nothing we can't help: it's likely your env. connection issue.",
"I created a totally new env and installed transformers 4.30.2 (latest ver). When I run the code below:\r\ntokenizer = AutoTokenizer.from_pretrained(xxx)\r\nit returns following error:\r\n\r\nFailed to import transformers.convert_graph_to_onnx because of the following error (look up to see its traceback):\r\nDLL load failed while importing _imaging:",
"For Chinese people, it will happen if you don't equipe yourself with a ladder to cross the GFW."
] | 1,688 | 1,700 | 1,688 |
NONE
| null |
### System Info
OSError
Can't load tokenizer for 'distilroberta-base'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'distilroberta-base' is the correct path to a directory containing all relevant files for a RobertaTokenizerFast tokenizer.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run a Python script with the following code:
tokenizer = AutoTokenizer.from_pretrained('distilroberta-base')
When the program reaches that line, it reports an OSError:
OSError
Can't load tokenizer for 'distilroberta-base'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'distilroberta-base' is the correct path to a directory containing all relevant files for a RobertaTokenizerFast tokenizer.
Actually, 'distilroberta-base' is an official model rather than one hosted under a user namespace; its link is 'https://huggingface.co/distilroberta-base', with no 'models' path segment.
How can we deal with this problem?
### Expected behavior
The program should run past the following code correctly:
tokenizer = AutoTokenizer.from_pretrained('distilroberta-base')
Ideally it should also work with a cache_dir, as follows:
tokenizer = AutoTokenizer.from_pretrained('distilroberta-base', cache_dir='./')
The model should be downloaded automatically into my project path without any error.
It is really inconvenient to use: with the default download mode (from_pretrained without args), it downloads into the C disk on Windows,
and if we assign a cache_dir it also reports an error. Even if we try to override the from_pretrained function to download the model into the assigned path with tqdm, it still reports an error at the request line, like: ('Connection aborted.', ConnectionResetError(10054,
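A possible workaround sketch, assuming the failures are purely network-related: download the files once with `huggingface_hub`, then load them from the local path in offline mode:
```python
from huggingface_hub import snapshot_download
from transformers import AutoTokenizer

# Download the repository once into a local cache (retry until the connection succeeds).
local_path = snapshot_download(repo_id="distilroberta-base", cache_dir="./hf_cache")

# From then on, load fully offline so flaky connections no longer matter.
tokenizer = AutoTokenizer.from_pretrained(local_path, local_files_only=True)
print(tokenizer("hello world"))
```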
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24586/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24585
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24585/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24585/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24585/events
|
https://github.com/huggingface/transformers/pull/24585
| 1,781,859,594 |
PR_kwDOCUB6oc5US-ab
| 24,585 |
[several models] improve readability
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> size ([int] โ a sequence of integers **defining the shape** of the output tensor.\r\n\r\nIt's actually mentioned, but I agree it's a bit less explicit than `torch.tensor(1)`.",
"Gentlemen, sleeping more on it, actually it just came to me that the cleanest most readable solution is just:\r\n\r\n```\r\nself.logit_scale = nn.Parameter(torch.tensor(self.config.logit_scale_init_value))\r\n```\r\n\r\n`torch.ones` isn't even needed ;)\r\n\r\nDo you agree?\r\n\r\nand yes, I can then fix other places.",
"ok, I have used:\r\n\r\n```\r\nself.logit_scale = nn.Parameter(torch.tensor(self.config.logit_scale_init_value))\r\n```\r\npattern everywhere I found `torch.ones([])`, plus in a few places I used `from_numpy` where the input was numpy, so I touched on other models as well.\r\n\r\nPlease have a look.\r\n",
"`from_numpy` didn't quite work in 2 models where I tried it, so going to use the normal `torch.tensor` constructor like everywhere else in this PR.",
"Nice finding :-)"
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
Honestly I had no idea what `torch.ones([]) * self.config.logit_scale_init_value` would return - it's not documented either.
~Proposing to change it to a very clear `torch.tensor(1.0)` which leaves no doubt to what it does.~
Proposing to change it to a very clear `torch.tensor(self.config.logit_scale_init_value)` which leaves no doubt to what it does.
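For illustration, a quick check of what the two forms produce (plain PyTorch; the init value below is just an example):
```python
import torch

logit_scale_init_value = 2.6592  # example value in the style of CLIP configs

old = torch.ones([]) * logit_scale_init_value  # 0-d (scalar) tensor
new = torch.tensor(logit_scale_init_value)     # also a 0-d scalar tensor, but explicit

print(old.shape, new.shape)   # torch.Size([]) torch.Size([])
print(torch.equal(old, new))  # True
```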
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24585/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24585",
"html_url": "https://github.com/huggingface/transformers/pull/24585",
"diff_url": "https://github.com/huggingface/transformers/pull/24585.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24585.patch",
"merged_at": 1688149647000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24584
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24584/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24584/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24584/events
|
https://github.com/huggingface/transformers/issues/24584
| 1,781,737,196 |
I_kwDOCUB6oc5qMybs
| 24,584 |
fsdp supports bool type in TrainingArguments; using len(args.fsdp) raises a TypeError when fsdp=True is set
|
{
"login": "duanzhenyu001",
"id": 103398099,
"node_id": "U_kgDOBim60w",
"avatar_url": "https://avatars.githubusercontent.com/u/103398099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/duanzhenyu001",
"html_url": "https://github.com/duanzhenyu001",
"followers_url": "https://api.github.com/users/duanzhenyu001/followers",
"following_url": "https://api.github.com/users/duanzhenyu001/following{/other_user}",
"gists_url": "https://api.github.com/users/duanzhenyu001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/duanzhenyu001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/duanzhenyu001/subscriptions",
"organizations_url": "https://api.github.com/users/duanzhenyu001/orgs",
"repos_url": "https://api.github.com/users/duanzhenyu001/repos",
"events_url": "https://api.github.com/users/duanzhenyu001/events{/privacy}",
"received_events_url": "https://api.github.com/users/duanzhenyu001/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"No, because it has been converted [here](https://github.com/huggingface/transformers/blob/2dc5e1a120176594ed2dcb7d2f02a5dd62266232/src/transformers/training_args.py#L1461).\r\n\r\nPlease do not open issue without a code reproducer of the problem.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,691 | 1,691 |
NONE
| null |
https://github.com/huggingface/transformers/blob/2dc5e1a120176594ed2dcb7d2f02a5dd62266232/src/transformers/trainer.py#L433
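For illustration, a minimal sketch of the failure mode being reported, plus a hypothetical guard (this is not the library's actual code):
```python
# `len()` on a bool raises TypeError, which is what would happen if a plain
# True/False value for `fsdp` reached a `len(args.fsdp)` check unconverted.
fsdp = True
try:
    len(fsdp)
except TypeError as err:
    print(err)  # object of type 'bool' has no len()

# Hypothetical normalization before taking len():
fsdp_options = fsdp if isinstance(fsdp, (list, tuple)) else (["full_shard"] if fsdp else [])
print(len(fsdp_options))  # 1
```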
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24584/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24583
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24583/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24583/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24583/events
|
https://github.com/huggingface/transformers/pull/24583
| 1,781,243,445 |
PR_kwDOCUB6oc5UQ5Ga
| 24,583 |
use tokenizer model max length
|
{
"login": "rui-ren",
"id": 15321482,
"node_id": "MDQ6VXNlcjE1MzIxNDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/15321482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rui-ren",
"html_url": "https://github.com/rui-ren",
"followers_url": "https://api.github.com/users/rui-ren/followers",
"following_url": "https://api.github.com/users/rui-ren/following{/other_user}",
"gists_url": "https://api.github.com/users/rui-ren/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rui-ren/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rui-ren/subscriptions",
"organizations_url": "https://api.github.com/users/rui-ren/orgs",
"repos_url": "https://api.github.com/users/rui-ren/repos",
"events_url": "https://api.github.com/users/rui-ren/events{/privacy}",
"received_events_url": "https://api.github.com/users/rui-ren/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR changes the `block_size` to `tokenizer.model_max_length` instead of using the default of 1024.
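A sketch of the kind of change described (the variable name `block_size` follows the language-modeling example scripts; the exact file touched is not shown here):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Before (sketch): the examples cap the context length with a hard-coded default.
old_block_size = min(tokenizer.model_max_length, 1024)

# After (sketch of this PR's intent): rely on the tokenizer's reported maximum length.
new_block_size = tokenizer.model_max_length

print(old_block_size, new_block_size)  # both 1024 for gpt2; they differ for longer-context models
```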
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24583/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24583",
"html_url": "https://github.com/huggingface/transformers/pull/24583",
"diff_url": "https://github.com/huggingface/transformers/pull/24583.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24583.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24582
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24582/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24582/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24582/events
|
https://github.com/huggingface/transformers/pull/24582
| 1,781,182,451 |
PR_kwDOCUB6oc5UQr0U
| 24,582 |
Fix annotations
|
{
"login": "tony9402",
"id": 30228292,
"node_id": "MDQ6VXNlcjMwMjI4Mjky",
"avatar_url": "https://avatars.githubusercontent.com/u/30228292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tony9402",
"html_url": "https://github.com/tony9402",
"followers_url": "https://api.github.com/users/tony9402/followers",
"following_url": "https://api.github.com/users/tony9402/following{/other_user}",
"gists_url": "https://api.github.com/users/tony9402/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tony9402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tony9402/subscriptions",
"organizations_url": "https://api.github.com/users/tony9402/orgs",
"repos_url": "https://api.github.com/users/tony9402/repos",
"events_url": "https://api.github.com/users/tony9402/events{/privacy}",
"received_events_url": "https://api.github.com/users/tony9402/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR is minor; it just fixes wrong annotations.
All models were tested and confirmed against the PyTorch model.
I think all of the code I modified is generated from the template.
Please check whether it is okay to modify the annotations in the template code.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24582/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24582",
"html_url": "https://github.com/huggingface/transformers/pull/24582",
"diff_url": "https://github.com/huggingface/transformers/pull/24582.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24582.patch",
"merged_at": 1688062655000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24581
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24581/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24581/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24581/events
|
https://github.com/huggingface/transformers/issues/24581
| 1,781,045,478 |
I_kwDOCUB6oc5qKJjm
| 24,581 |
_keys_to_ignore_on_load_unexpected not working with GPT2 model
|
{
"login": "alex-dima",
"id": 55513673,
"node_id": "MDQ6VXNlcjU1NTEzNjcz",
"avatar_url": "https://avatars.githubusercontent.com/u/55513673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alex-dima",
"html_url": "https://github.com/alex-dima",
"followers_url": "https://api.github.com/users/alex-dima/followers",
"following_url": "https://api.github.com/users/alex-dima/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-dima/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alex-dima/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-dima/subscriptions",
"organizations_url": "https://api.github.com/users/alex-dima/orgs",
"repos_url": "https://api.github.com/users/alex-dima/repos",
"events_url": "https://api.github.com/users/alex-dima/events{/privacy}",
"received_events_url": "https://api.github.com/users/alex-dima/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The problem stems from PyTorch lightning trying to load the model as far as I can see. Models should be loaded via the `from_pretrained` method we provide. If you are using `.load_state_dict()` it's up to you to clean the state dict you have and make sure it only contains keys the model accepts/",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,691 | 1,691 |
NONE
| null |
### System Info
- `transformers` version: 4.30.2
- Platform: macOS-13.2.1-arm64-arm-64bit
- Python version: 3.11.4
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@younesbelkada @ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I saved a PyTorch lightning model which contained a RoGPT2-large model from version 4.27.4. When trying to load the model in version 4.30.2, I get the following error:
Traceback (most recent call last):
File "/Users/alexandrudima/home/Research/Outlook Add-ins/Optimize/backend.py", line 23, in <module>
generation_model = GenerationModel.load_from_checkpoint(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml/lib/python3.11/site-packages/pytorch_lightning/core/module.py", line 1520, in load_from_checkpoint
loaded = _load_from_checkpoint(
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml/lib/python3.11/site-packages/pytorch_lightning/core/saving.py", line 89, in _load_from_checkpoint
storage = _load_state(cls, checkpoint, strict=strict, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml/lib/python3.11/site-packages/pytorch_lightning/core/saving.py", line 154, in _load_state
keys = obj.load_state_dict(checkpoint["state_dict"], strict=strict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml/lib/python3.11/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for GenerationModel:
Unexpected key(s) in state_dict: "model.transformer.h.0.attn.bias", "model.transformer.h.0.attn.masked_bias", "model.transformer.h.1.attn.bias", "model.transformer.h.1.attn.masked_bias", "model.transformer.h.2.attn.bias", "model.transformer.h.2.attn.masked_bias", "model.transformer.h.3.attn.bias", "model.transformer.h.3.attn.masked_bias", "model.transformer.h.4.attn.bias", "model.transformer.h.4.attn.masked_bias", "model.transformer.h.5.attn.bias", "model.transformer.h.5.attn.masked_bias", "model.transformer.h.6.attn.bias", "model.transformer.h.6.attn.masked_bias", "model.transformer.h.7.attn.bias", "model.transformer.h.7.attn.masked_bias", "model.transformer.h.8.attn.bias", "model.transformer.h.8.attn.masked_bias", "model.transformer.h.9.attn.bias", "model.transformer.h.9.attn.masked_bias", "model.transformer.h.10.attn.bias", "model.transformer.h.10.attn.masked_bias", "model.transformer.h.11.attn.bias", "model.transformer.h.11.attn.masked_bias", "model.transformer.h.12.attn.bias", "model.transformer.h.12.attn.masked_bias", "model.transformer.h.13.attn.bias", "model.transformer.h.13.attn.masked_bias", "model.transformer.h.14.attn.bias", "model.transformer.h.14.attn.masked_bias", "model.transformer.h.15.attn.bias", "model.transformer.h.15.attn.masked_bias", "model.transformer.h.16.attn.bias", "model.transformer.h.16.attn.masked_bias", "model.transformer.h.17.attn.bias", "model.transformer.h.17.attn.masked_bias", "model.transformer.h.18.attn.bias", "model.transformer.h.18.attn.masked_bias", "model.transformer.h.19.attn.bias", "model.transformer.h.19.attn.masked_bias", "model.transformer.h.20.attn.bias", "model.transformer.h.20.attn.masked_bias", "model.transformer.h.21.attn.bias", "model.transformer.h.21.attn.masked_bias", "model.transformer.h.22.attn.bias", "model.transformer.h.22.attn.masked_bias", "model.transformer.h.23.attn.bias", "model.transformer.h.23.attn.masked_bias", "model.transformer.h.24.attn.bias", "model.transformer.h.24.attn.masked_bias", "model.transformer.h.25.attn.bias", "model.transformer.h.25.attn.masked_bias", "model.transformer.h.26.attn.bias", "model.transformer.h.26.attn.masked_bias", "model.transformer.h.27.attn.bias", "model.transformer.h.27.attn.masked_bias", "model.transformer.h.28.attn.bias", "model.transformer.h.28.attn.masked_bias", "model.transformer.h.29.attn.bias", "model.transformer.h.29.attn.masked_bias", "model.transformer.h.30.attn.bias", "model.transformer.h.30.attn.masked_bias", "model.transformer.h.31.attn.bias", "model.transformer.h.31.attn.masked_bias", "model.transformer.h.32.attn.bias", "model.transformer.h.32.attn.masked_bias", "model.transformer.h.33.attn.bias", "model.transformer.h.33.attn.masked_bias", "model.transformer.h.34.attn.bias", "model.transformer.h.34.attn.masked_bias", "model.transformer.h.35.attn.bias", "model.transformer.h.35.attn.masked_bias".
I suspect that something is not right with the `_keys_to_ignore_on_load_unexpected` attribute defined in `GPT2LMHeadModel`, but I have no idea what the problem could be.
P.S. The model can be loaded without problems when using transformers==4.27.4.
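In the meantime, a possible workaround (an untested sketch; the checkpoint paths are placeholders) is to strip those stale GPT-2 attention buffers from the checkpoint before loading it strictly, or to load it non-strictly:
```python
import torch

# Drop the buffer keys that newer transformers versions no longer register.
ckpt = torch.load("path/to/checkpoint.ckpt", map_location="cpu")
ckpt["state_dict"] = {
    k: v
    for k, v in ckpt["state_dict"].items()
    if not k.endswith((".attn.bias", ".attn.masked_bias"))
}
torch.save(ckpt, "path/to/checkpoint_cleaned.ckpt")

# Alternatively, load the original checkpoint with strict=False:
# model = GenerationModel.load_from_checkpoint("path/to/checkpoint.ckpt", strict=False)
```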
### Expected behavior
The model should also be loaded without problems in 4.30.2.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24581/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24580
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24580/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24580/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24580/events
|
https://github.com/huggingface/transformers/pull/24580
| 1,781,016,618 |
PR_kwDOCUB6oc5UQJXi
| 24,580 |
๐ [i18n-KO] Translated `custom_tools.mdx` to Korean
|
{
"login": "sim-so",
"id": 96299403,
"node_id": "U_kgDOBb1piw",
"avatar_url": "https://avatars.githubusercontent.com/u/96299403?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sim-so",
"html_url": "https://github.com/sim-so",
"followers_url": "https://api.github.com/users/sim-so/followers",
"following_url": "https://api.github.com/users/sim-so/following{/other_user}",
"gists_url": "https://api.github.com/users/sim-so/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sim-so/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sim-so/subscriptions",
"organizations_url": "https://api.github.com/users/sim-so/orgs",
"repos_url": "https://api.github.com/users/sim-so/repos",
"events_url": "https://api.github.com/users/sim-so/events{/privacy}",
"received_events_url": "https://api.github.com/users/sim-so/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"LGTM๐",
"Thanks for the PR, @sim-so! \r\n\r\nThere's since been an update on main to all of our documentation files. Could you update the extension from `.mdx` to `.md` to match please?",
"Thanks for letting me know about the change! @amyeroberts! \r\nI updated the extention to `.md`.\r\n\r\nCould you review this PR?\r\n@sgugger, @ArthurZucker, @eunseojo"
] | 1,688 | 1,689 | 1,689 |
CONTRIBUTOR
| null |
# What does this PR do?
Translated the `custom_tools.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (๋ฒ์ญ ๋๋ฝ/์ค๋ณต ๊ฒ์ฌ)
- [x] Grammar Check (๋ง์ถค๋ฒ ๊ฒ์ฌ)
- [x] Review or Add new terms to glossary (์ฉ์ด ํ์ธ ๋ฐ ์ถ๊ฐ)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview๋ก ์ ์์๋ ํ์ธ)
## Who can review? (Initial)
Team PseudoLab, may you please review this PR?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
@sgugger, @ArthurZucker, @eunseojo
May you please review this PR?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24580/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24580/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24580",
"html_url": "https://github.com/huggingface/transformers/pull/24580",
"diff_url": "https://github.com/huggingface/transformers/pull/24580.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24580.patch",
"merged_at": 1689591850000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24579
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24579/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24579/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24579/events
|
https://github.com/huggingface/transformers/issues/24579
| 1,781,001,591 |
I_kwDOCUB6oc5qJ-13
| 24,579 |
Datasets in run_translation.py
|
{
"login": "SoyGema",
"id": 24204714,
"node_id": "MDQ6VXNlcjI0MjA0NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/24204714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SoyGema",
"html_url": "https://github.com/SoyGema",
"followers_url": "https://api.github.com/users/SoyGema/followers",
"following_url": "https://api.github.com/users/SoyGema/following{/other_user}",
"gists_url": "https://api.github.com/users/SoyGema/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SoyGema/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SoyGema/subscriptions",
"organizations_url": "https://api.github.com/users/SoyGema/orgs",
"repos_url": "https://api.github.com/users/SoyGema/repos",
"events_url": "https://api.github.com/users/SoyGema/events{/privacy}",
"received_events_url": "https://api.github.com/users/SoyGema/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @Rocketknight1 since this is a TensorFlow example.",
"Hey @Rocketknight1 ๐ I think we crossed in #24341 . Thanks for the Notebook repository discovery . \r\nWas a nice quick fix !\r\n\r\nIยดve given another try to some vector thoughts posted in the issue.\r\n* Regarding **reproducibility**. Read the script again, and tried to get some distance and analyze it as an isolated example coming from the library. The script is quite _structured_ and _documentation comments are well-suited_, it _generalizes really well_. Adding here the dataset name wouldnยดt really work . Besides, if the dataset associated to the examples changes, it would require a change. At this point maybe would add a small sentence with a recommendation to go through the README.md , so the example remains general/scalable across various datatasets? But minor in retrospective. Makes sense to you ? \r\n* Sent a PR to fix the broken link\r\n\r\nThanks for the script structure and the guidance! ๐\r\n",
"### Comments on Issue 1\r\nCurrently the run_translation.py script works well with [wtm16](https://huggingface.co/datasets/wmt16) dataset , as it provides splitting for train, test and validation . \r\n\r\n\r\nIยดm closing this issue, as the dataset for running the script has been found, and the broken link was fixed through a PR #24594 "
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
### System Info
Hello there! ๐
I'm following along with the [run_translation.py](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/translation/run_translation.py) example.
Thanks for making it! It expands nicely on the translation docs [tutorial](https://huggingface.co/docs/transformers/tasks/translation)
### Context
Managed to configure flags for training. When launching in the CLI
`
python train_model.py --model_name_or_path '/Users/.../The-Lord-of-The-Words-The-two-frameworks/src/models/t5-small' --output_dir '/en-ru-model' --dataset_name '/Users/.../The-Lord-of-The-Words-The-two-frameworks/src/data/opus_books' --dataset_config_name en-ru --do_train --source_lang en --target_lang ru --num_train_epochs 1 --overwrite_output_dir
`
the following error appears
```
raise TypeError("Dataset argument should be a datasets.Dataset!")
TypeError: Dataset argument should be a datasets.Dataset!
```
Then I read the [forum recommendation](https://discuss.huggingface.co/t/quick-tour-train-using-tensorflow-gives-dataset-argument-should-be-a-datasets-dataset-error/33657), commented out the `tf_eval_dataset` creation, and launched training. The **model trained** without the [eval_dataset](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/translation/run_translation.py#L535).
When I passed the `--do_eval` flag, it raised the error flagged [here](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/translation/run_translation.py#L444)
I downloaded the [opus books dataset](https://huggingface.co/datasets/opus_books) and saw in the README.md that it doesn't have a validation split:
```
- config_name: en-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- ru
splits:
- name: train
num_bytes: 5190880
num_examples: 15496
download_size: 1613419
dataset_size: 5190880
```
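For reference, one way to get a validation split out of a train-only dataset like this is to carve it out manually with `datasets` (a sketch; the split size and seed are arbitrary):
```python
from datasets import load_dataset

raw_datasets = load_dataset("opus_books", "en-ru")
# opus_books only ships a train split, so create a validation set by hand.
split = raw_datasets["train"].train_test_split(test_size=0.1, seed=42)
raw_datasets["train"] = split["train"]
raw_datasets["validation"] = split["test"]
print(raw_datasets)
```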
### Issue 1. Reproducibility coming from tutorial
* Can you please confirm that this **example runs straightforwardly** with [WMT19](https://huggingface.co/datasets/wmt19) and that I wouldn't hit this issue using that dataset instead of the opus books one?
* Would you be willing to accept a PR that adds a comment in the example, either pointing to the README table or making it more explicit which specific dataset this example was written for (with its link), around [here](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/translation/run_translation.py#L311)? Is there a way you think I could help users coming from the docs tutorial to the script example?
Am I missing something? I think it's dataset-related, but I'm not sure anymore...
### Issue 2. Broken link
Found a [broken link](https://huggingface.co/docs/datasets/loading_datasets.html.); if that's OK with you, I'll fix it with [this one](https://huggingface.co/docs/datasets/loading)
#### Dependencies
```
transformers==4.31.0.dev0
tensorflow-macos==2.10.0
```
#### Tangential and mental model
I'm actually following [this script](https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks/blob/main/src/models/train_model.py), which is a copy that came recommended in #24254. Please let me know if something has changed. I'm looking at the [history](https://github.com/huggingface/transformers/commits/main/examples/tensorflow/translation/run_translation.py) and the last commit seems to be from Jun 7, while mine is from Jun 13.
I grouped the broken link with dataset in one issue as it might impact 1 PR for Reproducibility, but let me know if you prefer them separately.
Thanks so so much for your help ๐ & thanks for the library!
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. run the script
2. download opus books dataset
3. config flags
4. run script with and without eval_dataset logic
### Expected behavior
- Dataset? Either linked in the README.md or mentioned in a comment in the script?
- Correct link for the datasets loading docs
Tagging @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24579/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24578
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24578/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24578/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24578/events
|
https://github.com/huggingface/transformers/pull/24578
| 1,780,980,274 |
PR_kwDOCUB6oc5UQBWV
| 24,578 |
fix peft ckpts not being pushed to hub
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
1. Currently, when ckpts are saved every `save_steps` and `push_to_hub=True`, PEFT ckpts aren't being pushed to hub.
Reason:
Every `save_steps`, the ckpts are written to local `checkpoint-xx` folders. Then, in the trainer's `_push_from_checkpoint` function, the model files are copied from the latest `checkpoint-xx` folder to the `output_dir`, which gets pushed to the Hub. Note that `modeling_files` doesn't include `ADAPTER_WEIGHTS_NAME`, `ADAPTER_SAFE_WEIGHTS_NAME` or `ADAPTER_CONFIG_NAME`, so the adapter files are never pushed.
This PR fixes it.
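As a rough illustration of the fix (a hedged sketch, not the exact diff; it assumes the adapter filename constants referenced above are importable from `transformers.utils`):
```python
import os
import shutil

from transformers.utils import (
    ADAPTER_CONFIG_NAME,
    ADAPTER_SAFE_WEIGHTS_NAME,
    ADAPTER_WEIGHTS_NAME,
    CONFIG_NAME,
    SAFE_WEIGHTS_NAME,
    WEIGHTS_NAME,
)


def copy_checkpoint_modeling_files(checkpoint_folder: str, output_dir: str) -> None:
    # Include the PEFT adapter files alongside the usual modeling files so they
    # also land in `output_dir` and get pushed to the Hub.
    modeling_files = [
        CONFIG_NAME,
        WEIGHTS_NAME,
        SAFE_WEIGHTS_NAME,
        ADAPTER_CONFIG_NAME,
        ADAPTER_WEIGHTS_NAME,
        ADAPTER_SAFE_WEIGHTS_NAME,
    ]
    for modeling_file in modeling_files:
        src = os.path.join(checkpoint_folder, modeling_file)
        if os.path.isfile(src):
            shutil.copy(src, os.path.join(output_dir, modeling_file))
```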
cc @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24578/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24578",
"html_url": "https://github.com/huggingface/transformers/pull/24578",
"diff_url": "https://github.com/huggingface/transformers/pull/24578.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24578.patch",
"merged_at": 1688063865000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24577
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24577/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24577/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24577/events
|
https://github.com/huggingface/transformers/pull/24577
| 1,780,950,537 |
PR_kwDOCUB6oc5UP68d
| 24,577 |
Add imageArray
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24577). All of your documentation changes will be reflected on that endpoint.",
"@ydshieh Adding you as a first reviewer as you're always able to give good SWE advice :) This is quite a big design decision, so will be following up with requests from reviews from others once we've ironed out details here, if that's OK. ",
"@amyeroberts Thanks for requesting me for the first review. I would like to hear a bit more from you despite not looking through the changes yet ๐ \r\n\r\n- So `ImageObject` will be only used inside the methods of image processor/processing, and the arguments/return values will remain as numpy array?\r\n\r\n- From the changes in image processor files, I don't see how this new class helps reducing/simplifying the logic of processing images. Am I missing anything ...?",
"@ydshieh \r\n\r\n> So ImageObject will be only used inside the methods of image processor/processing, and the arguments/return values will remain as numpy array?\r\n\r\nMostly. In most cases, this won't be seen by users because the image processors are called with a framework specified e.g. `return_tensors=\"pt\"`\r\n\r\nIf the user doesn't specify `return_tensors`, then the returned objects would be a list of `ImageObject`. Currently it `return_tensors=None` returns a list of numpy arrays. \r\n\r\nI could make sure to return numpy arrays at the end of processing with something like:\r\n\r\n```py\r\nimages = [image.numpy() for image in images]\r\n``` \r\n\r\nbefore passing to `BatchFeature`. \r\n\r\nIn the future. once the image transformers have been adapted to use the `ImageObject` attributes, then users will see `ImageObject` returned if they called methods directly, either on the image processor or from the transforms library:\r\n\r\n```py\r\nfrom transformers.image_transforms import resize\r\n\r\n# Returned resized_image is an ImageObject object\r\nresized_image = image_processor.resize(image, size={\"height\": h, \"width\": w})\r\nresized_image = resize(image, size={\"height\": h, \"width\": w})\r\n```\r\n\r\n> From the changes in image processor files, I don't see how this new class helps reducing/simplifying the logic of processing images. Am I missing thing ...?\r\n\r\nIt doesn't yet. This is just introducing the object and replacing the current numpy arrays to ensure everything still works as-is. Part of the next steps is updating logic in e.g. `image_transforms` and in some of the array logic to simplifying things are reduce repeated calculations. \r\n",
"OK Thank you.\r\n\r\nIn this PR, `image` argument of `preprocess` remains as numpy array, which is โ
. The return values should keep as numpy array (if not returning tensor) for which you will update in this PR โ
.\r\n\r\nIn the next PR, you will update the file `src/transformers/image_transforms.py` to use `ImageObject` โ
.\r\n\r\nThe only thing I am a bit worried is if the input/output of methods in that file will be changed: Those are still considered as public methods (right? We should discuss this with @sgugger anyway.) and we should keep them still accepting numpy array input, and return numpy array if it is currently. This might cause the conversion between numpy array <--> ImageObject several times, for which I am not 100% sure if you would love.\r\n\r\nHowever, this is a question for the next PR, not for this PR.\r\n\r\n",
"Hi @amyeroberts \r\n\r\nCould you remind me the reason to remove `ImageObject` and only use `ImageArray`. I just need to refresh my memory, thank you ๐ ",
"@ydshieh Of course :) Sorry, I should have added some explanatory comments. \r\n\r\nI actually just renamed `ImageObject` to `ImageArray` - the class hasn't been removed.\r\n\r\nI did remove casting inputs to `ImageObject` / `ImageArray` in the image processors as it make the PR big and required tackling a few parts of the processing logic which I believe is out of scope. ",
"@sgugger Yes, I understand. Tbh, I'd rather not have this class. Originally I wanted just a wrapper around the array that could be passed along with the image instead of additional arguments everywhere in the processing methods and functions. Unfortunately, it's necessary to wrap the array like this to have the state persist with numpy array operations and not needing tonnes of extra handling code. \r\n\r\nI'm going to quickly write up an alternative with passing arguments around and compare the two. \r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"Closing as superseded for #25464, a bit uglier but simpler solution :) \r\n\r\n@ydshieh @rafaelpadilla @sgugger Thank you for your extensive reviews and help on this PR. "
] | 1,688 | 1,692 | 1,692 |
COLLABORATOR
| null |
# What does this PR do?
Adds a new class `ImageArray` for use as part of the image processing pipeline. It acts as an array container, which we can use to store information about the image e.g. the data format. This is the recommended way to create 'array-like' numpy objects with persistent attributes: https://numpy.org/doc/stable/user/basics.dispatch.html
The intention is to enable users to explicitly set information about the image, e.g. `data_format`, and have it carried through the processing pipeline in a stateful way. This addresses issues where information about the input image(s) is repeatedly and unnecessarily inferred inside functions, or where it is ambiguous, e.g. an image of shape `(3, 3, 3)`. See:
* #21981
* #21638
* #22577
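For background, here is a deliberately simplified sketch of the dispatch mechanism the class relies on. This is *not* the implementation in this PR (among other things, the real `ImageArray` also passes `isinstance(img, np.ndarray)`, as the examples below show); `MetadataArray` and its fields are illustrative names only:
```python
import numpy as np


class MetadataArray(np.lib.mixins.NDArrayOperatorsMixin):
    """Toy container that keeps metadata across numpy calls (illustration only)."""

    def __init__(self, array, data_format=None):
        self._array = np.asarray(array)
        self.data_format = data_format

    def __array__(self, dtype=None):
        # Lets np.array(obj) and friends get at the raw data.
        return self._array if dtype is None else self._array.astype(dtype)

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        # Unwrap wrapped inputs, run the ufunc, then re-wrap so the metadata persists.
        inputs = tuple(x._array if isinstance(x, MetadataArray) else x for x in inputs)
        return MetadataArray(getattr(ufunc, method)(*inputs, **kwargs), self.data_format)

    def __array_function__(self, func, types, args, kwargs):
        # Same idea for non-ufunc numpy functions such as np.mean.
        args = tuple(a._array if isinstance(a, MetadataArray) else a for a in args)
        return MetadataArray(func(*args, **kwargs), self.data_format)


img = MetadataArray(np.zeros((2, 2, 3)), data_format="channels_last")
print((img * 2).data_format)              # channels_last
print(np.mean(img, axis=-1).data_format)  # channels_last (a real class would update this)
```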
Defining `__array_ufunc__` and `__array_function__` means `ImageArray` supports numpy operations, e.g.
```python
>>> from transformers.image_utils import ImageArray
>>> import numpy as np
>>> x = np.random.randint(0, 256, (2, 2, 3))
>>> img = ImageArray(x)
>>> img
ImageArray([[[ 20 232 120]
[197 244 147]]
[[ 47 241 95]
[ 73 251 140]]], data_format=channels_last, num_channels=3, shape=(2, 2, 3))
# Standard array operations - multiplication, addition etc. are possible
>>> img * 2
ImageArray([[[ 40 464 240]
[394 488 294]]
[[ 94 482 190]
[146 502 280]]], data_format=channels_last, num_channels=3, shape=(2, 2, 3))
>>> img + img
ImageArray([[[ 40 464 240]
[394 488 294]]
[[ 94 482 190]
[146 502 280]]], data_format=channels_last, num_channels=3, shape=(2, 2, 3))
# Numpy functions and array methods can be used
>>> np.mean(img, axis=-1)
ImageArray([[124. 196. ]
[127.66666667 154.66666667]], data_format=none, num_channels=0, shape=(2, 2))
>>> img.mean(axis=-1)
ImageArray([[124. 196. ]
[127.66666667 154.66666667]], data_format=none, num_channels=0, shape=(2, 2))
# Supports slicing
>>> img[:, :, 1]
ImageArray([[232 244]
[241 251]], data_format=none, num_channels=0, shape=(2, 2))
# Supports type casting
>>> img.astype(np.float32)
ImageArray([[[ 20. 232. 120.]
[197. 244. 147.]]
[[ 47. 241. 95.]
[ 73. 251. 140.]]], data_format=channels_last, num_channels=3, shape=(2, 2, 3))
# Can be cast back as a numpy array
>>> np.array(img)
array([[[ 20, 232, 120],
[197, 244, 147]],
[[ 47, 241, 95],
[ 73, 251, 140]]])
# Is a numpy array isinstance
>>> isinstance(img, np.ndarray)
True
```
## ๐ช ๐ช ๐ช Tricky bits ๐ช ๐ช ๐ช
Although this enables `ImageArray` to be used directly in existing numpy logic, it does create issues when interfacing with other frameworks like `torch` or `PIL`. The following operations fail:
```
PIL.Image.fromarray(img)
torch.from_numpy(img)
```
This is because these libraries directly access the underlying memory using Python's buffer protocol. As far as I can tell, there is no direct way of exposing this on the Python side, and it would require writing C code to enable it. This seems like overkill to me. The only case I know of where this causes an issue is in the pix2struct image processor, which uses some [torch-specific logic](https://github.com/huggingface/transformers/blob/6eedfa6dd15dc1e22a55ae036f681914e5a0d9a1/src/transformers/models/pix2struct/image_processing_pix2struct.py#L260) (which ideally would be removed).
As image processors are almost exclusively used with direct calls, i.e. `image_processor(img, return_tensors="pt")`, and the torch tensor batch conversion still works, I don't expect this to cause many issues.
One way of getting this to work is to return numpy arrays when array methods are called:
* `np.mean(arr)` would return an `ImageArray`, while `image_array.mean(...)` would return a numpy array.
Tbh, I wasn't able to completely figure out the interplay between this functionality and `torch.from_numpy`, as the latter seems to just call C code.
## Next steps
- [ ] Adapt functionality in `image_transforms` to use the new `ImageArray` class
- [ ] Add some logic for array operations to avoid repeatedly inferring e.g. `num_channels` when the resulting array is created
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24577/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24577/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24577",
"html_url": "https://github.com/huggingface/transformers/pull/24577",
"diff_url": "https://github.com/huggingface/transformers/pull/24577.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24577.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24576
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24576/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24576/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24576/events
|
https://github.com/huggingface/transformers/pull/24576
| 1,780,920,986 |
PR_kwDOCUB6oc5UP0XW
| 24,576 |
Fix ESM models buffers
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24576). All of your documentation changes will be reflected on that endpoint."
] | 1,688 | 1,688 | 1,688 |
COLLABORATOR
| null |
# What does this PR do?
Apparently these buffer keys still need to be loadable from the state dict, even though we create them at init.
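For context, here is a generic PyTorch illustration of the behaviour in question (not the PR diff itself): a persistent buffer created at init is still expected to be loadable from the state dict.
```python
import torch
from torch import nn


class Block(nn.Module):
    def __init__(self):
        super().__init__()
        # persistent=True (the default) keeps the buffer in the state dict, so a
        # strict load expects checkpoints to provide it even though we build it
        # at init; persistent=False would drop it from the state dict instead.
        self.register_buffer("position_ids", torch.arange(8).unsqueeze(0), persistent=True)


block = Block()
print(list(block.state_dict().keys()))  # ['position_ids']
block.load_state_dict({"position_ids": torch.arange(8).unsqueeze(0)})  # loads cleanly
```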
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24576/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24576",
"html_url": "https://github.com/huggingface/transformers/pull/24576",
"diff_url": "https://github.com/huggingface/transformers/pull/24576.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24576.patch",
"merged_at": 1688050521000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24575
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24575/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24575/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24575/events
|
https://github.com/huggingface/transformers/issues/24575
| 1,780,919,448 |
I_kwDOCUB6oc5qJqyY
| 24,575 |
๐ Text Generation docs rework
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hello there @gante ! ๐\r\nFirst and foremost, thanks for opening a discussion about this .\r\n\r\nWould like to glimpse a couple of vector thoughts about point 1, as point 2 and 3 seem consistent and structured enough\r\n\r\n### 1. Designing a \"home page\" for text generation docs\r\n\r\nAs a user , the underlying pattern that Iยดm glimpsing from the **Tutorials** section make reference to _how to use the library from the engineering perspective, in abstract terms_ ( preprocess data, Share your model, fine-tune etc ) . The tutorials seem like a library approach for a โHOW-TO โ do things. In fact, in Tutorials section, several examples about vision, audio and language are displayed. \r\nI would think about putting a **Text Generation** section directly in Task guides , inside Natural Language processing, at the top , as it is related to a challenge to solve ( Text classification, Token classification ) . This doesnโt entail that one of the main โHOW-TOsโ related to text generation would be included inside Tutorials as a section. From what Iโm taking for the [guide](https://huggingface.co/blog/how-to-generate), there is an insightful section of _Search_ and _Sampling_, that could be added to the Tutorials, and a more detailed clarification added in Tasks and Developer guides.\r\n\r\nThe thing is that following this schema, at first sight ( abstracting **main challenge** from guide in **Tutorials** and add a **robust example or some \"home-page\"** references in **Tasks** with link to developer guides ) seems more _coherent with your current structure._ \r\n\r\nOn the other hand, and tangential, why not adding a [LLMs leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) link somewhere (maybe in point 2) so the users can be mindful about the state of the art of the models in terms of perf for **Text generation** tasks ? \r\n\r\n\r\nHope I explained myself clearly enough ๐งญ\r\nThanks again for the open discussion! And for making the library! ๐\r\n",
"Big +1 on the 3.) point. I think self-explanatory code with better doc string examples and arg/kwarg checking would take us a long way! \r\n\r\nRE: 1.) Yes this makes sense, but I think a single concise page is probably better than a homepage that links to everything we already have as this might be too difficult to keep up to date and might also be too complex. A single concise page under \"Tutorials\" which is a strong iteration on the blog post \"how to generate\" (which is still one of our most read blog posts) could be a good starting point. The blog post does a very good job at explaining how LLMs fundamentally work. It is however not up to date anymore and also puts too much focus on things like top_k and top_p. So a strong incremental improvement (more tailored to new LLMs) could serve as the main point of introduction to text-generation and be put under \"Tutorials\".\r\n\r\nRE: 2.) Yes I think these could all go into Developer guides and be a nice iterative improvement to \"Customize Text generation strategy\"",
"Hi @gante - I love the plan.\r\n\r\nHere are a couple of quick suggestions:\r\n1. Big +1 on validation with parameters passed to generate, or even just changing the error message to point to [text generation strategies post](https://huggingface.co/docs/transformers/generation_strategies)\r\n2. I agree with @patrickvonplaten - Instead of a bouquet of docs, just one simple and concise doc page would do more wonders than not.\r\n3. I think a good way to structure would be starting from the basics - explaining the default behaviour (greedy search) and work our way up to other strategies. What would be helpful is to provide suggested parameter values along with the strategy as well.\r\n4. Each of the above strategy can be paired with two `toggle` based snippets, how to generate with `pipeline` and how to generate with `processor + generate` -> this will help cater to our rough user base.\r\n5. We can end the blog post with all the cool tricks that are not part of the generate `yet`, link it to a gh repo or gist. These are examples like generate with `ggml`, `gptq` integration and so on.\r\n6. [Long term] once we have this page in place we can work our way to update the model cards on text-gen models to add a link to it. I reckon it'll just be a batch PR.\r\n\r\nCheers!",
"Adding here that it would be nice to include a section on batched generation (I made a PR for GPT-2 [here](https://github.com/huggingface/transformers/pull/24432)). This is not that intuitive for people as you need to pad from the left, set the padding token appropriately in the tokenizer, etc.",
"Thank you for the feedback folks ๐ \r\n\r\nI've incorporated your feedback in the plan, which got edited. Main differences:\r\n- Part 1 now consists in updating the existing blog post and creating a short usage tutorial (with references to the blog post and advanced docs over its contents, as opposed to a stand-alone section with links, like the other tutorials)\r\n- Part 2 got condensed to reduce the long-term maintenance burden",
"Thanks for taking the time to outline this detailed rework! Big +1 for the additional developer guides and tutorial. โค๏ธ\r\n\r\nI would consider leaving the blog post as is also to reduce long-term maintenance and instead keep all the relevant text generation content in the docs. In general, I feel like a blog post is more appropriate for \"timely\" content like announcements/news or explaining why something was designed the way it was. Content that needs to be maintained and updated is better in the docs I think. As you mentioned, the how-to-generate blog post still contains super useful background info about text generation so I think we should definitely find a way to preserve that info. My suggestions would be to:\r\n\r\n- link from the blog post to the docs for the latest changes (could be a simpler banner at the top like [this](https://play.tailwindcss.com/MqrXeJutFi))\r\n- create a doc in the Conceptual Guide section to hold the background info from the how-to-generate blog post",
"As discussed with @gante, I'll start working on an LLM prompting guide (part 2.2 (\"Prompting\" )).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,688 | 1,704 | 1,704 |
MEMBER
| null |
# What is this?
This is an issue to discuss and track the rework of the docs for text generation. Comments and feedback are appreciated, as always ๐ค
# Current issues
1. Our [main reference for text generation](https://huggingface.co/blog/how-to-generate) is not in the docs and is quite outdated
2. The docs regarding text generation are scattered, and it is not simple to navigate between them -- the reader has to know where to look for them
3. We lack examples beyond the simplest forms of text generation
4. We have undocumented advanced use cases, such as setting a custom stopping criteria
5. We are not clear about what the user can't do
# Proposed plan
EDIT:
- incorporated feedback up to [this comment](https://github.com/huggingface/transformers/issues/24575#issuecomment-1617747844) (inclusive)
- Also includes [this comment](https://github.com/huggingface/transformers/pull/25240#issuecomment-1668029668)
I'd like to split the plan into three parts:
1. Designing a simpler entry point to text generation, from which all related documentation is discoverable
2. Upgrading the developer guides to cover the full potential of text generation
3. Make our code more self-documenting and other code changes
## 1. Designing a simpler entry point for text generation docs
Tackles issues 1 and 2.
This part is further divided into two actions:
- [x] The [blog post](https://huggingface.co/blog/how-to-generate) is still a solid reference for the background in text generation, but it holds old examples (`tensorflow`!) and focuses a bit too much on `top_p`/`top_k`. Let's retouch it.
- [x] Create a short tutorial to serve as an entry point to the multiple forms of text generation. Like the other tutorials, it contains references to related docs throughout the text (let's see if it is enough to handle discoverability -- we can create a stand-alone related docs section in the future if needed). It would also cover a few basics like "use left-padding when doing batched generation with decoder-only models" and "double-check your generate kwargs".
Related docs:
1. Tasks
2. Related developer guides
3. API reference
4. Outside `transformers` (e.g. `optimum`, `text-generation-inference`, LLM leaderboard, non-HF libs like `autogptq`?)
## 2. Upgrading the developer guides
Tackles issues 3 and 4.
We currently have [one developer guide](https://huggingface.co/docs/transformers/generation_strategies), which describes the API and a few basic ways to manipulate text generation. I propose we improve the existing one and add a few new guides, preferably with examples that cover more modalities and use cases:
- [ ] 1. Improve the existing guide -- Add a section about the impact of logits processors, and another on how stopping conditions operate.
- [x] 2. "Prompting" -- Some basic "do and don'ts" regarding prompting and how different types of models respond differently to it (encoder-decoder vs decoder, instruction-tuned vs base), the importance of prompting on chat applications
- [x] 3. Using LLMs, with a focus on the 1st L (large) -- write about variable types, quantization, device mapping, advanced architectures (alibi, rope, MQA/GQA), flash attention
- [ ] 4. Advanced examples (name?) -- Concrete use cases that make use of many features at once, to serve as inspiration: how to control between extractive and abstractive summarization, retrieval-augmented generation, and other modality-specific examples
## 3. Self-documenting code and other code changes
Tackles issues 3 and 5.
- [x] Let's be honest -- the best user experience is when no docs are needed at all. We can improve our game here by performing parameterization validation. Currently, our validation step is very superficial, and users are allowed to do things like passing `temperature` with `do_sample=False`, ultimately resulting in GH issues. I'd suggest performing a hard validation and throwing informative exceptions, pointing to the redesigned docs ๐ค (a concrete example of this kind of mismatch is shown after this list)
- [x] In parallel, our logits processors and stopping condition classes are missing docstring examples on how to use them. This should make our API reference much more robust.
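As a concrete illustration of the kind of mismatch this validation would surface (a generic `gpt2` example; exact warning/error behaviour depends on the installed version):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The quick brown fox", return_tensors="pt")

# `temperature` only matters when sampling; with do_sample=False (greedy search)
# it has no effect, which is exactly the kind of flag combination a hard
# validation step should flag with an informative message.
outputs = model.generate(**inputs, do_sample=False, temperature=0.7, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```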
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24575/reactions",
"total_count": 7,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 7,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24575/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24574
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24574/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24574/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24574/events
|
https://github.com/huggingface/transformers/pull/24574
| 1,780,702,906 |
PR_kwDOCUB6oc5UPEis
| 24,574 |
Revert "Fix typing annotations for FSDP and DeepSpeed in TrainingArguments"
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24574). All of your documentation changes will be reflected on that endpoint."
] | 1,688 | 1,688 | 1,688 |
COLLABORATOR
| null |
Reverts huggingface/transformers#24549
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24574/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24574",
"html_url": "https://github.com/huggingface/transformers/pull/24574",
"diff_url": "https://github.com/huggingface/transformers/pull/24574.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24574.patch",
"merged_at": 1688040884000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24573
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24573/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24573/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24573/events
|
https://github.com/huggingface/transformers/pull/24573
| 1,780,543,089 |
PR_kwDOCUB6oc5UOhZv
| 24,573 |
Check all objects are equally in the main `__init__` file
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"We cannot remove things from the main init, as it would be a breaking change. The `BertLayer` and the likes should never have been made public like this but it's too late now. And we shouldn't add the corresponding TF/Flax objects either, so this defeats the purpose of this new check.",
"I can add exceptional cases list though and not to touch those problematic entries.",
"Sure!"
] | 1,688 | 1,688 | 1,688 |
COLLABORATOR
| null |
# What does this PR do?
Add one more check in `check_repo.py` to further ensure objects are in the main `__init__` (using our PyTorch objects as the reference).
### Actions to take
(Do you agree @sgugger ?)
- The following should be added to the main `__init__`:
```bash
TFAutoModelForAudioClassification should be defined in the main `__init__` file.
TFAutoModelForMaskGeneration should be defined in the main `__init__` file.
TFAutoModelForMaskedImageModeling should be defined in the main `__init__` file.
```
- A fix is required:
```bash
TF_AutoModelForSemanticSegmentation should be defined in the main `__init__`
file.
```
due to the mistake
```
TF_AutoModelForSemanticSegmentation = auto_class_update(
TFAutoModelForSemanticSegmentation, head_doc="semantic segmentation"
)
```
- The following (the **pytorch** one) should be **removed** from the main `__init__`:
```bash
TFBertLayer should be defined in the main `__init__` file.
FlaxBertLayer should be defined in the main `__init__` file.
FlaxBigBirdLayer should be defined in the main `__init__` file.
TFLxmertEncoder should be defined in the main `__init__` file.
TFLxmertXLayer should be defined in the main `__init__` file.
TFMPNetLayer should be defined in the main `__init__` file.
TFMobileBertLayer should be defined in the main `__init__` file.
FlaxRoFormerLayer should be defined in the main `__init__` file.
TFSegformerLayer should be defined in the main `__init__` file.
TFViTMAELayer should be defined in the main `__init__` file.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24573/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24573",
"html_url": "https://github.com/huggingface/transformers/pull/24573",
"diff_url": "https://github.com/huggingface/transformers/pull/24573.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24573.patch",
"merged_at": 1688053800000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24572
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24572/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24572/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24572/events
|
https://github.com/huggingface/transformers/pull/24572
| 1,780,535,572 |
PR_kwDOCUB6oc5UOfvi
| 24,572 |
Docs: 4 bit doc corrections
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,688 | 1,688 | 1,688 |
MEMBER
| null |
# What does this PR do?
Some 4-bit references were written with "8" instead of "4"
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24572/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24572",
"html_url": "https://github.com/huggingface/transformers/pull/24572",
"diff_url": "https://github.com/huggingface/transformers/pull/24572.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24572.patch",
"merged_at": 1688040800000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24571
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24571/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24571/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24571/events
|
https://github.com/huggingface/transformers/pull/24571
| 1,780,533,902 |
PR_kwDOCUB6oc5UOfXT
| 24,571 |
Fix annotations
|
{
"login": "tony9402",
"id": 30228292,
"node_id": "MDQ6VXNlcjMwMjI4Mjky",
"avatar_url": "https://avatars.githubusercontent.com/u/30228292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tony9402",
"html_url": "https://github.com/tony9402",
"followers_url": "https://api.github.com/users/tony9402/followers",
"following_url": "https://api.github.com/users/tony9402/following{/other_user}",
"gists_url": "https://api.github.com/users/tony9402/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tony9402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tony9402/subscriptions",
"organizations_url": "https://api.github.com/users/tony9402/orgs",
"repos_url": "https://api.github.com/users/tony9402/repos",
"events_url": "https://api.github.com/users/tony9402/events{/privacy}",
"received_events_url": "https://api.github.com/users/tony9402/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,688 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
I found wrong shape annotations in the `MBart` and `Pegasus` models.
Fixed the wrong annotations:
- (seq_len, batch, embed_dim) -> (batch, seq_len, embed_dim)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24571/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24571",
"html_url": "https://github.com/huggingface/transformers/pull/24571",
"diff_url": "https://github.com/huggingface/transformers/pull/24571.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24571.patch",
"merged_at": 1688040320000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24570
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24570/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24570/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24570/events
|
https://github.com/huggingface/transformers/pull/24570
| 1,780,468,543 |
PR_kwDOCUB6oc5UORKK
| 24,570 |
Removal of deprecated vision methods and specify deprecation versions
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @sgugger Just wanting to double check it's OK to remove these deprecated methods now before I merge in "
] | 1,688 | 1,688 | 1,688 |
COLLABORATOR
| null |
# What does this PR do?
* Removes a number of properties, methods and pieces of logic that have carried deprecation warnings for a few versions.
* Adds specific deprecation versions for methods that didn't have one specified.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24570/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24570",
"html_url": "https://github.com/huggingface/transformers/pull/24570",
"diff_url": "https://github.com/huggingface/transformers/pull/24570.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24570.patch",
"merged_at": 1688047792000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24569
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24569/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24569/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24569/events
|
https://github.com/huggingface/transformers/issues/24569
| 1,780,361,524 |
I_kwDOCUB6oc5qHik0
| 24,569 |
LlamaTokenizer: Slow implementation opts for whitespace-lead token (different from fast)
|
{
"login": "lbeurerkellner",
"id": 17903049,
"node_id": "MDQ6VXNlcjE3OTAzMDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17903049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lbeurerkellner",
"html_url": "https://github.com/lbeurerkellner",
"followers_url": "https://api.github.com/users/lbeurerkellner/followers",
"following_url": "https://api.github.com/users/lbeurerkellner/following{/other_user}",
"gists_url": "https://api.github.com/users/lbeurerkellner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lbeurerkellner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lbeurerkellner/subscriptions",
"organizations_url": "https://api.github.com/users/lbeurerkellner/orgs",
"repos_url": "https://api.github.com/users/lbeurerkellner/repos",
"events_url": "https://api.github.com/users/lbeurerkellner/events{/privacy}",
"received_events_url": "https://api.github.com/users/lbeurerkellner/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thanks for reporting, will have a look",
"Hi @ArthurZucker! Are you currently working on this? If not, I think I could fix it pretty quickly :)",
"Sure! Feel free to take it! ๐ I'll have a look soon otherwise\r\n",
"@ArthurZucker @lbeurerkellner I have done some debugging and I have a few observations. Firstly I have checked other tokenizers that use `LlamaTokenizer` or `LlamaTokenizerFast` and the results are pretty weird:\r\n\r\n1) the issue is not with `uns` but with any word after a special token like `<s>`. Why this is happening is pretty straightforward\r\n\r\n```\r\n# <s> is added to Trie so there is a split after its encounter in the text\r\ntokens = self.tokens_trie.split(text) # tokenization_utils.py:517\r\n```\r\nSo it seems like it was a deliberate decision to split special tokens like this? \r\n\r\n1) because of the above split, all slow tokenizers based on `LLaMaTokenizer` return `['<s>', 'โuns']`\r\n\r\n2) more interesting thing is that most of the tokenizers based on `LlamaTokenizerFast` split text into `['โ<s>', 'uns']` (e.g `fxmarty/tiny-llama-fast-tokenizer`). But for example `openlm-research/open_llama_3b` which is one of the most downloaded llama based models outputs `['<s>', 'โuns']` even thought it has the same tokenizer config like the one from fxmarty.\r\n\r\n```LlamaTokenizerFast(name_or_path='openlm-research/open_llama_3b', vocab_size=32000, model_max_length=2048, is_fast=True, padding_side='left', truncation_side='right', special_tokens={'bos_token': AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken(\"<unk>\", rstrip=False, lstrip=False, single_word=False, normalized=True)}, clean_up_tokenization_spaces=False)```",
"the fast is working properly! As suspected, this is linked to #24622 and #24565. I am working on a fix for all our spm based models. \r\n\r\nFor other tokenizers, I wouldnโt refer to them since a lot are outdated/donโt include some fixes",
"Actually this is fixed, the output is now `['โ<s>', 'uns'] ['<s>', 'uns']`. The fast just works that way for tokenization, but the output is the same. Use \r\n```python \r\nslow = AutoTokenizer.from_pretrained(model, use_fast=False, legacy = False)\r\n```\r\n"
] | 1,688 | 1,689 | 1,689 |
NONE
| null |
### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker @youn
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Reproduction
Comparing slow and fast `LlamaTokenizer` instances with `huggyllama/llama-7b`.
```
from transformers import AutoTokenizer
model = "huggyllama/llama-7b"
fast = AutoTokenizer.from_pretrained(model)
slow = AutoTokenizer.from_pretrained(model, use_fast=False)
# use tokenize()
print(fast.tokenize("<s>uns"), slow.tokenize("<s>uns"))
# -> (['โ<s>', 'uns'], ['<s>', 'โuns'])
# use __call__
print(fast(f"{fast.bos_token}uns", add_special_tokens=False), slow(f"{slow.bos_token}uns", add_special_tokens=False))
# -> ({'input_ids': [1, 6948], 'token_type_ids': [0, 0], 'attention_mask': [1, 1]},
# {'input_ids': [1, 9644], 'attention_mask': [1, 1]})
# round-tripping
print(fast.convert_tokens_to_string(fast.tokenize("<s>uns")), fast.convert_tokens_to_string(slow.tokenize("<s>uns")))
# -> ('<s>uns', '<s> uns')
```
### Expected behavior
It looks like the slow LlamaTokenizer wrongly tokenises `uns`. I would not expect the additional whitespace when round-tripping or when tokenising in the first place.
Thanks a lot in advance.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24569/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24569/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24568
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24568/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24568/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24568/events
|
https://github.com/huggingface/transformers/issues/24568
| 1,780,178,142 |
I_kwDOCUB6oc5qG1ze
| 24,568 |
Set FSDP `transformer_layer_cls_to_wrap` to `model._no_split_modules` ?
|
{
"login": "apoorvkh",
"id": 7005565,
"node_id": "MDQ6VXNlcjcwMDU1NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7005565?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apoorvkh",
"html_url": "https://github.com/apoorvkh",
"followers_url": "https://api.github.com/users/apoorvkh/followers",
"following_url": "https://api.github.com/users/apoorvkh/following{/other_user}",
"gists_url": "https://api.github.com/users/apoorvkh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apoorvkh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apoorvkh/subscriptions",
"organizations_url": "https://api.github.com/users/apoorvkh/orgs",
"repos_url": "https://api.github.com/users/apoorvkh/repos",
"events_url": "https://api.github.com/users/apoorvkh/events{/privacy}",
"received_events_url": "https://api.github.com/users/apoorvkh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @pacman100 ",
"Any thoughts about this? Maybe also cc @stas00?",
"Unfortunately I don't have experience with FSDP to contribute to this discussion.",
"@pacman100 Friendly ping",
"Hello @apoorvkh, the code part you highlighted is enabled now only when using FSDP+XLA. For general FSDP, internally everything is handled by Accelerate. It happens here: https://github.com/huggingface/transformers/blob/66954ea25e342fd451c26ec1c295da0b8692086b/src/transformers/training_args.py#L1533-L1556\r\n\r\n`fsdp_transformer_layer_cls_to_wrap` support specifying multiple modules but most of the time it is enough to specify the `_no_split_modules`. So, we can have `_no_split_modules` as a default in case the user doesn't specify it when passing `--fsdp full_shard auto_wrap`. \r\n",
"PRs https://github.com/huggingface/accelerate/pull/1753 and https://github.com/huggingface/transformers/pull/24980 should add this capability wherein it will try `model. _no_split_modules ` if `fsdp_transformer_layer_cls_to_wrap ` isn't specified. Can you try it out?",
"Very cool, thanks a ton! I will try it out and let you know.",
"Just circling back, works on my end -- thanks again!",
"@pacman100 I want to better understand the mechanism of FSDP's wrapping. \r\n\r\nDo you know why `transformer_layer_cls_to_wrap` can be automatically assigned to `_no_split_module` by default? \r\n\r\nMy understanding of that latter is from [this post](https://huggingface.co/blog/accelerate-large-models):\r\n\r\n> Actually using this device map later on won't work, because the layers composing this model have residual connections (where the input of the block is added to the output of the block) so all of a given layer should be on the same device. We can indicate this to Accelerate by passing a list of module names that shouldn't be split with the **no_split_module_classes** keyword argument:\r\n\r\nI understand this means that the module should not be split during the forward pass. However, I am not sure I see the connection with `transformer_layer_cls_to_wrap`, which seems to be a way to indicate which class should be wrapped by [FSDP](https://pytorch.org/docs/stable/fsdp.html) (this is based on my limited understanding of FSDP).\r\n\r\nIs there a connection between those two variables, or is it simply a way to quickly find the name of the transformer layers (since it is named with a convention of `{model_name}DecoderLayer` but it is not always consistent)?"
] | 1,688 | 1,701 | 1,689 |
CONTRIBUTOR
| null |
### Feature request
Currently, when training with FSDP, the Trainer expects to receive an `fsdp_config` argument specifying `fsdp_transformer_layer_cls_to_wrap`.
https://github.com/huggingface/transformers/blob/66954ea25e342fd451c26ec1c295da0b8692086b/src/transformers/trainer.py#L1394-L1406
I am wondering if we can set this automatically, when the model has a `_no_split_modules` attribute, e.g.
https://github.com/huggingface/transformers/blob/66954ea25e342fd451c26ec1c295da0b8692086b/src/transformers/models/opt/modeling_opt.py#L401
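For illustration, a rough sketch of such a default (the helper name below is hypothetical, not existing Trainer code):

```python
def resolve_fsdp_layer_cls_to_wrap(model, fsdp_config: dict) -> list:
    """Hypothetical helper: fall back to the model's `_no_split_modules`
    when `fsdp_transformer_layer_cls_to_wrap` is not given in `fsdp_config`."""
    layer_cls = fsdp_config.get("fsdp_transformer_layer_cls_to_wrap")
    if not layer_cls:
        layer_cls = getattr(model, "_no_split_modules", None) or []
    # The auto-wrap policy expects a list of layer class names.
    return [layer_cls] if isinstance(layer_cls, str) else list(layer_cls)
```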
### Motivation
It would be a convenient feature to set this automatically. This argument is model-specific, but it might be nice to define training arguments independently of a specific model type.
### Your contribution
Happy to help make a PR. Would be great if you can confirm whether this would be desirable or if I am misunderstanding something. Thanks!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24568/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24567
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24567/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24567/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24567/events
|
https://github.com/huggingface/transformers/issues/24567
| 1,780,119,294 |
I_kwDOCUB6oc5qGnb-
| 24,567 |
MT5 data padding not working
|
{
"login": "hexie1995",
"id": 39319332,
"node_id": "MDQ6VXNlcjM5MzE5MzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/39319332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hexie1995",
"html_url": "https://github.com/hexie1995",
"followers_url": "https://api.github.com/users/hexie1995/followers",
"following_url": "https://api.github.com/users/hexie1995/following{/other_user}",
"gists_url": "https://api.github.com/users/hexie1995/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hexie1995/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hexie1995/subscriptions",
"organizations_url": "https://api.github.com/users/hexie1995/orgs",
"repos_url": "https://api.github.com/users/hexie1995/repos",
"events_url": "https://api.github.com/users/hexie1995/events{/privacy}",
"received_events_url": "https://api.github.com/users/hexie1995/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ArthurZucker ",
"Thank you. One additional information: I tried to follow step by step the official text summrization tutorial here: https://github.com/huggingface/notebooks/blob/main/examples/summarization.ipynb\r\nBut the same error occurred. Thanks a lot! ",
"Hey! Thanks for reporting could you share the entire traceback of the error? ๐ ",
"Sure, here's the whole error message. Thanks a lot!\r\n\r\n\r\n```\r\n`---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\nCell In[16], line 10\r\n 1 data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)\r\n 2 trainer = Seq2SeqTrainer(\r\n 3 model=model,\r\n 4 args=training_args,\r\n (...)\r\n 8 data_collator=data_collator,\r\n 9 )\r\n---> 10 trainer.train()\r\n 11 output = \"/output/\"\r\n 12 #trainer.save_model(output + \"MT5-12-original-XLSUM-accuracy\")\r\n\r\nFile ~\\AppData\\Local\\anaconda3\\envs\\hface\\lib\\site-packages\\transformers\\trainer.py:1645, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)\r\n 1640 self.model_wrapped = self.model\r\n 1642 inner_training_loop = find_executable_batch_size(\r\n 1643 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size\r\n 1644 )\r\n-> 1645 return inner_training_loop(\r\n 1646 args=args,\r\n 1647 resume_from_checkpoint=resume_from_checkpoint,\r\n 1648 trial=trial,\r\n 1649 ignore_keys_for_eval=ignore_keys_for_eval,\r\n 1650 )\r\n\r\nFile ~\\AppData\\Local\\anaconda3\\envs\\hface\\lib\\site-packages\\transformers\\trainer.py:2011, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)\r\n 2008 self.state.epoch = epoch + (step + 1 + steps_skipped) / steps_in_epoch\r\n 2009 self.control = self.callback_handler.on_step_end(args, self.state, self.control)\r\n-> 2011 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)\r\n 2012 else:\r\n 2013 self.control = self.callback_handler.on_substep_end(args, self.state, self.control)\r\n\r\nFile ~\\AppData\\Local\\anaconda3\\envs\\hface\\lib\\site-packages\\transformers\\trainer.py:2312, in Trainer._maybe_log_save_evaluate(self, tr_loss, model, trial, epoch, ignore_keys_for_eval)\r\n 2310 metrics.update(dataset_metrics)\r\n 2311 else:\r\n-> 2312 metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)\r\n 2313 self._report_to_hp_search(trial, self.state.global_step, metrics)\r\n 2315 # Run delayed LR scheduler now that metrics are populated\r\n\r\nFile ~\\AppData\\Local\\anaconda3\\envs\\hface\\lib\\site-packages\\transformers\\trainer_seq2seq.py:159, in Seq2SeqTrainer.evaluate(self, eval_dataset, ignore_keys, metric_key_prefix, **gen_kwargs)\r\n 154 gen_kwargs[\"num_beams\"] = (\r\n 155 gen_kwargs[\"num_beams\"] if gen_kwargs.get(\"num_beams\") is not None else self.args.generation_num_beams\r\n 156 )\r\n 157 self._gen_kwargs = gen_kwargs\r\n--> 159 return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)\r\n\r\nFile ~\\AppData\\Local\\anaconda3\\envs\\hface\\lib\\site-packages\\transformers\\trainer.py:3043, in Trainer.evaluate(self, eval_dataset, ignore_keys, metric_key_prefix)\r\n 3040 start_time = time.time()\r\n 3042 eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop\r\n-> 3043 output = eval_loop(\r\n 3044 eval_dataloader,\r\n 3045 description=\"Evaluation\",\r\n 3046 # No point gathering the predictions if there are no metrics, otherwise we defer to\r\n 3047 # self.args.prediction_loss_only\r\n 3048 prediction_loss_only=True if self.compute_metrics is None else None,\r\n 3049 ignore_keys=ignore_keys,\r\n 3050 metric_key_prefix=metric_key_prefix,\r\n 3051 )\r\n 3053 total_batch_size = self.args.eval_batch_size * self.args.world_size\r\n 3054 if 
f\"{metric_key_prefix}_jit_compilation_time\" in output.metrics:\r\n\r\nFile ~\\AppData\\Local\\anaconda3\\envs\\hface\\lib\\site-packages\\transformers\\trainer.py:3235, in Trainer.evaluation_loop(self, dataloader, description, prediction_loss_only, ignore_keys, metric_key_prefix)\r\n 3232 batch_size = observed_batch_size\r\n 3234 # Prediction step\r\n-> 3235 loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)\r\n 3236 inputs_decode = self._prepare_input(inputs[\"input_ids\"]) if args.include_inputs_for_metrics else None\r\n 3238 if is_torch_tpu_available():\r\n\r\nFile ~\\AppData\\Local\\anaconda3\\envs\\hface\\lib\\site-packages\\transformers\\trainer_seq2seq.py:276, in Seq2SeqTrainer.prediction_step(self, model, inputs, prediction_loss_only, ignore_keys)\r\n 270 if (\r\n 271 \"labels\" in inputs\r\n 272 and \"decoder_input_ids\" in inputs\r\n 273 and inputs[\"labels\"].shape == inputs[\"decoder_input_ids\"].shape\r\n 274 ):\r\n 275 inputs = {k: v for k, v in inputs.items() if k != \"decoder_input_ids\"}\r\n--> 276 generated_tokens = self.model.generate(**inputs, **gen_kwargs)\r\n 278 # Temporary hack to ensure the generation config is not initialized for each iteration of the evaluation loop\r\n 279 # TODO: remove this hack when the legacy code that initializes generation_config from a model config is\r\n 280 # removed in https://github.com/huggingface/transformers/blob/98d88b23f54e5a23e741833f1e973fdf600cc2c5/src/transformers/generation/utils.py#L1183\r\n 281 if self.model.generation_config._from_model_config:\r\n\r\nFile ~\\AppData\\Local\\anaconda3\\envs\\hface\\lib\\site-packages\\torch\\autograd\\grad_mode.py:28, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs)\r\n 25 @functools.wraps(func)\r\n 26 def decorate_context(*args, **kwargs):\r\n 27 with self.__class__():\r\n---> 28 return func(*args, **kwargs)\r\n\r\nFile ~\\AppData\\Local\\anaconda3\\envs\\hface\\lib\\site-packages\\transformers\\generation\\utils.py:1522, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, **kwargs)\r\n 1516 raise ValueError(\r\n 1517 \"num_return_sequences has to be 1 when doing greedy search, \"\r\n 1518 f\"but is {generation_config.num_return_sequences}.\"\r\n 1519 )\r\n 1521 # 11. 
run greedy search\r\n-> 1522 return self.greedy_search(\r\n 1523 input_ids,\r\n 1524 logits_processor=logits_processor,\r\n 1525 stopping_criteria=stopping_criteria,\r\n 1526 pad_token_id=generation_config.pad_token_id,\r\n 1527 eos_token_id=generation_config.eos_token_id,\r\n 1528 output_scores=generation_config.output_scores,\r\n 1529 return_dict_in_generate=generation_config.return_dict_in_generate,\r\n 1530 synced_gpus=synced_gpus,\r\n 1531 streamer=streamer,\r\n 1532 **model_kwargs,\r\n 1533 )\r\n 1535 elif is_contrastive_search_gen_mode:\r\n 1536 if generation_config.num_return_sequences > 1:\r\n\r\nFile ~\\AppData\\Local\\anaconda3\\envs\\hface\\lib\\site-packages\\transformers\\generation\\utils.py:2339, in GenerationMixin.greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs)\r\n 2336 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)\r\n 2338 # forward pass to get next token\r\n-> 2339 outputs = self(\r\n 2340 **model_inputs,\r\n 2341 return_dict=True,\r\n 2342 output_attentions=output_attentions,\r\n 2343 output_hidden_states=output_hidden_states,\r\n 2344 )\r\n 2346 if synced_gpus and this_peer_finished:\r\n 2347 continue # don't waste resources running the code we don't need\r\n\r\nFile ~\\AppData\\Local\\anaconda3\\envs\\hface\\lib\\site-packages\\torch\\nn\\modules\\module.py:1051, in Module._call_impl(self, *input, **kwargs)\r\n 1047 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1048 # this function, and just call forward.\r\n 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1050 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1051 return forward_call(*input, **kwargs)\r\n 1052 # Do not call functions when jit is used\r\n 1053 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~\\AppData\\Local\\anaconda3\\envs\\hface\\lib\\site-packages\\transformers\\models\\mt5\\modeling_mt5.py:1753, in MT5ForConditionalGeneration.forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 1750 decoder_attention_mask = decoder_attention_mask.to(self.decoder.first_device)\r\n 1752 # Decode\r\n-> 1753 decoder_outputs = self.decoder(\r\n 1754 input_ids=decoder_input_ids,\r\n 1755 attention_mask=decoder_attention_mask,\r\n 1756 inputs_embeds=decoder_inputs_embeds,\r\n 1757 past_key_values=past_key_values,\r\n 1758 encoder_hidden_states=hidden_states,\r\n 1759 encoder_attention_mask=attention_mask,\r\n 1760 head_mask=decoder_head_mask,\r\n 1761 cross_attn_head_mask=cross_attn_head_mask,\r\n 1762 use_cache=use_cache,\r\n 1763 output_attentions=output_attentions,\r\n 1764 output_hidden_states=output_hidden_states,\r\n 1765 return_dict=return_dict,\r\n 1766 )\r\n 1768 sequence_output = decoder_outputs[0]\r\n 1770 # Set device for model parallelism\r\n\r\nFile ~\\AppData\\Local\\anaconda3\\envs\\hface\\lib\\site-packages\\torch\\nn\\modules\\module.py:1051, in Module._call_impl(self, *input, **kwargs)\r\n 1047 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1048 # this function, and just call forward.\r\n 1049 if not 
(self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1050 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1051 return forward_call(*input, **kwargs)\r\n 1052 # Do not call functions when jit is used\r\n 1053 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~\\AppData\\Local\\anaconda3\\envs\\hface\\lib\\site-packages\\transformers\\models\\mt5\\modeling_mt5.py:1062, in MT5Stack.forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 1049 layer_outputs = checkpoint(\r\n 1050 create_custom_forward(layer_module),\r\n 1051 hidden_states,\r\n (...)\r\n 1059 None, # past_key_value is always None with gradient checkpointing\r\n 1060 )\r\n 1061 else:\r\n-> 1062 layer_outputs = layer_module(\r\n 1063 hidden_states,\r\n 1064 attention_mask=extended_attention_mask,\r\n 1065 position_bias=position_bias,\r\n 1066 encoder_hidden_states=encoder_hidden_states,\r\n 1067 encoder_attention_mask=encoder_extended_attention_mask,\r\n 1068 encoder_decoder_position_bias=encoder_decoder_position_bias,\r\n 1069 layer_head_mask=layer_head_mask,\r\n 1070 cross_attn_layer_head_mask=cross_attn_layer_head_mask,\r\n 1071 past_key_value=past_key_value,\r\n 1072 use_cache=use_cache,\r\n 1073 output_attentions=output_attentions,\r\n 1074 )\r\n 1076 # layer_outputs is a tuple with:\r\n 1077 # hidden-states, key-value-states, (self-attention position bias), (self-attention weights), (cross-attention position bias), (cross-attention weights)\r\n 1078 if use_cache is False:\r\n\r\nFile ~\\AppData\\Local\\anaconda3\\envs\\hface\\lib\\site-packages\\torch\\nn\\modules\\module.py:1051, in Module._call_impl(self, *input, **kwargs)\r\n 1047 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1048 # this function, and just call forward.\r\n 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1050 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1051 return forward_call(*input, **kwargs)\r\n 1052 # Do not call functions when jit is used\r\n 1053 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~\\AppData\\Local\\anaconda3\\envs\\hface\\lib\\site-packages\\transformers\\models\\mt5\\modeling_mt5.py:557, in MT5Block.forward(self, hidden_states, attention_mask, position_bias, encoder_hidden_states, encoder_attention_mask, encoder_decoder_position_bias, layer_head_mask, cross_attn_layer_head_mask, past_key_value, use_cache, output_attentions, return_dict)\r\n 554 else:\r\n 555 self_attn_past_key_value, cross_attn_past_key_value = None, None\r\n--> 557 self_attention_outputs = self.layer[0](\r\n 558 hidden_states,\r\n 559 attention_mask=attention_mask,\r\n 560 position_bias=position_bias,\r\n 561 layer_head_mask=layer_head_mask,\r\n 562 past_key_value=self_attn_past_key_value,\r\n 563 use_cache=use_cache,\r\n 564 output_attentions=output_attentions,\r\n 565 )\r\n 566 hidden_states, present_key_value_state = self_attention_outputs[:2]\r\n 567 attention_outputs = self_attention_outputs[2:] # Keep self-attention outputs and relative position weights\r\n\r\nFile ~\\AppData\\Local\\anaconda3\\envs\\hface\\lib\\site-packages\\torch\\nn\\modules\\module.py:1051, in Module._call_impl(self, *input, **kwargs)\r\n 1047 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 
1048 # this function, and just call forward.\r\n 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1050 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1051 return forward_call(*input, **kwargs)\r\n 1052 # Do not call functions when jit is used\r\n 1053 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~\\AppData\\Local\\anaconda3\\envs\\hface\\lib\\site-packages\\transformers\\models\\mt5\\modeling_mt5.py:462, in MT5LayerSelfAttention.forward(self, hidden_states, attention_mask, position_bias, layer_head_mask, past_key_value, use_cache, output_attentions)\r\n 451 def forward(\r\n 452 self,\r\n 453 hidden_states,\r\n (...)\r\n 459 output_attentions=False,\r\n 460 ):\r\n 461 normed_hidden_states = self.layer_norm(hidden_states)\r\n--> 462 attention_output = self.SelfAttention(\r\n 463 normed_hidden_states,\r\n 464 mask=attention_mask,\r\n 465 position_bias=position_bias,\r\n 466 layer_head_mask=layer_head_mask,\r\n 467 past_key_value=past_key_value,\r\n 468 use_cache=use_cache,\r\n 469 output_attentions=output_attentions,\r\n 470 )\r\n 471 hidden_states = hidden_states + self.dropout(attention_output[0])\r\n 472 outputs = (hidden_states,) + attention_output[1:] # add attentions if we output them\r\n\r\nFile ~\\AppData\\Local\\anaconda3\\envs\\hface\\lib\\site-packages\\torch\\nn\\modules\\module.py:1051, in Module._call_impl(self, *input, **kwargs)\r\n 1047 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1048 # this function, and just call forward.\r\n 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1050 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1051 return forward_call(*input, **kwargs)\r\n 1052 # Do not call functions when jit is used\r\n 1053 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~\\AppData\\Local\\anaconda3\\envs\\hface\\lib\\site-packages\\transformers\\models\\mt5\\modeling_mt5.py:420, in MT5Attention.forward(self, hidden_states, mask, key_value_states, position_bias, past_key_value, layer_head_mask, query_length, use_cache, output_attentions)\r\n 417 else:\r\n 418 position_bias_masked = position_bias\r\n--> 420 scores += position_bias_masked\r\n 421 attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(\r\n 422 scores\r\n 423 ) # (batch_size, n_heads, seq_length, key_length)\r\n 424 attn_weights = nn.functional.dropout(\r\n 425 attn_weights, p=self.dropout, training=self.training\r\n 426 ) # (batch_size, n_heads, seq_length, key_length)\r\n\r\nRuntimeError: output with shape [4, 12, 1, 1] doesn't match the broadcast shape [4, 12, 1, 32]\r\n`\r\n```",
"Hey! I did not have time to check this, if you can isolate a small reproduction script (without all the training loop) would be great. Otherwise, I am investigating \r\n",
"Hi Arthur @ArthurZucker , the code that I shared initially is a small training loop without all the samples and could reproduce the error once run (the training size is set to be 16 and the evaluation set to be 8). The run time should take about 3 minutes top, because it has to download the CNNDailyMail dataset first. Thank a lot for your help!!",
"Ok, low on bandwidth so pinging @Rocketknight1 in case he can have a look! ",
"Sorry @hexie1995 did not have time to have look ๐ข ",
"I figured this one out! Making a PR.",
"@hexie1995 This should now be fixed on main! You can install from `main` with `pip install git+https://github.com/huggingface/transformers.git`. It will also be included in the next release, at which point you can go back to just `pip install transformers`.\r\n\r\nAnd thanks for the bug report - it turns out there really was an issue deep in the `transformers` code that was causing this!",
"Thank you! This is wonderful news. I will install the new one now. "
] | 1,688 | 1,699 | 1,697 |
NONE
| null |
### System Info
Hello,
I am using the latest version of transformers.
I ran into this issue recently and would like some help with it. I am fine-tuning MT5 ("google/mt5-base") on my own dataset, and while processing the data I keep getting an error about mismatched dimensions, even after padding and truncating as suggested in the example.
I tried the exact same code with XLMProphetNet, XLM-RoBERTa, and XLNet, and all of them worked; only MT5 gives me this error. It almost always occurs at the first step where the trainer evaluates on the validation data. I suspect this has something to do with the evaluation loop, but so far I have found nothing that helps me resolve the issue. The error is:
```
RuntimeError: output with shape [4, 12, 1, 1] doesn't match the broadcast shape [4, 12, 1, 128]
```
@alexayalamcs tagging Alex here.
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer, XLMProphetNetDecoder,DataCollatorWithPadding
from transformers import DataCollatorForLanguageModeling
from datasets import concatenate_datasets, load_dataset
from transformers import MT5ForConditionalGeneration, MT5Tokenizer, MT5Config, MT5Model,T5Tokenizer
import torch
from torch.utils.data import DataLoader
from transformers import Trainer
import nltk
import random
from accelerate import Accelerator
accelerator = Accelerator()
import datasets
rouge = datasets.load_metric("rouge")
import evaluate
accuracy_metric = evaluate.load("accuracy")
train = load_dataset("cnn_dailymail", "3.0.0", split = "train")
valid = load_dataset("cnn_dailymail", "3.0.0", split = "validation")
test = load_dataset("cnn_dailymail", "3.0.0", split = "test")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-base")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")
encoder_max_length=512
decoder_max_length=128
def process_data_to_model_inputs(batch):
# tokenize the inputs and labels
inputs = tokenizer(batch["article"], padding="max_length",truncation=True, max_length=encoder_max_length)
outputs = tokenizer(batch["highlights"],padding="max_length", truncation=True, max_length=decoder_max_length)
batch["input_ids"] = inputs.input_ids
batch["attention_mask"] = inputs.attention_mask
batch["decoder_input_ids"] = outputs.input_ids
batch["decoder_attention_mask"] = outputs.attention_mask
batch["labels"] = outputs.input_ids.copy()
return batch
train_data = train.select(range(16))
#train_data = train_init
#batch_size = 16
batch_size=4
train_data = train_data.map(
process_data_to_model_inputs,
batched=True,
batch_size=batch_size,
remove_columns=["article", "highlights", "id"]
)
train_data.set_format(
type="torch", columns=["input_ids", "attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"],
)
val_data = valid.select(range(8))
#val_data = valid
val_data = val_data.map(
process_data_to_model_inputs,
batched=True,
batch_size=batch_size,
remove_columns=["article", "highlights", "id"]
)
val_data.set_format(
type="torch", columns=["input_ids", "attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"],
)
from transformers import Seq2SeqTrainer,Seq2SeqTrainingArguments
training_args = Seq2SeqTrainingArguments(
predict_with_generate=True,
num_train_epochs = 3,
evaluation_strategy="steps",
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
fp16=False,
output_dir="./",
logging_steps=2,
#save_steps=5000,
eval_steps=2,
# logging_steps=1000,
# save_steps=500,
# eval_steps=7500,
# warmup_steps=2000,
# save_total_limit=3,
)
def compute_metrics(pred):
labels_ids = pred.label_ids
pred_ids = pred.predictions
pred_str = tokenizer.batch_decode(pred_ids)
label_str = tokenizer.batch_decode(labels_ids)
rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid
return {
"rouge2_precision": round(rouge_output.precision, 4),
"rouge2_recall": round(rouge_output.recall, 4),
"rouge2_fmeasure": round(rouge_output.fmeasure, 4),
}
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_data,
eval_dataset=val_data,
)
trainer.train()
```
### Expected behavior
I would expect this to run through just fine like XLMProphetNet, XLM-RoBERTa, and XLNet, but it does not.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24567/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24567/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24566
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24566/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24566/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24566/events
|
https://github.com/huggingface/transformers/pull/24566
| 1,780,115,288 |
PR_kwDOCUB6oc5UNFIc
| 24,566 |
Update some torchscript tests after #24505
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger Thanks for the suggestion. I actually copied from the common test file. However, there is \r\n\r\n```python\r\n model_buffers = list(model.buffers())\r\n for non_persistent_buffer in non_persistent_buffers.values():\r\n found_buffer = False\r\n for i, model_buffer in enumerate(model_buffers):\r\n if torch.equal(non_persistent_buffer, model_buffer):\r\n found_buffer = True\r\n break\r\n\r\n self.assertTrue(found_buffer)\r\n model_buffers.pop(i)\r\n```\r\nthere (which I didn't see when working on this PR). This uses the `values`.\r\n\r\nSo I am going to copy the above block (and therefore keeping using dict). Let me know if you have other opinions instead.",
"If you end up using the values, no worries.",
"Copy the mentioned block to all `_create_and_check_torchscript` definition.\r\n\r\nLet's not change this by looking if the model use persistent or non-persistent buffuer: just keep the logic in the common test file."
] | 1,688 | 1,688 | 1,688 |
COLLABORATOR
| null |
# What does this PR do?
Need to update the logic in some torchscript tests after #24505
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24566/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24566",
"html_url": "https://github.com/huggingface/transformers/pull/24566",
"diff_url": "https://github.com/huggingface/transformers/pull/24566.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24566.patch",
"merged_at": 1688047524000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24565
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24565/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24565/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24565/events
|
https://github.com/huggingface/transformers/pull/24565
| 1,780,000,805 |
PR_kwDOCUB6oc5UMsWU
| 24,565 |
โ ๏ธโ ๏ธ[`T5Tokenize`] Fix T5 family tokenizersโ ๏ธโ ๏ธ
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Actually switch t5 tests have to be updated! \r\nThis means I have to check if the models were trained with this extra token (if they used HF tokenizer) or not.\r\n- [x] `tests.models.instructblip.test_modeling_instructblip.InstructBlipModelIntegrationTest testMethod=test_inference_flant5_xl` failing on `main` too so not related.....\r\n<img width=\"1560\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/48595927/529a1cdf-6907-42c3-9c48-1d2a6177c8e6\">\r\n\r\n- [x] `tests.models.mt5.test_modeling_flax_mt5.MT5IntegrationTest` also fails on main...\r\n\r\n\r\n- [x] `tests/models/t5/test_tokenization_t5.py` the issue comes from the `convert_slow` modification. Need to investigate\r\n - [ ] tests/models/t5/test_tokenization_t5.py:399 T5TokenizationTest.test_get_sentinel_token_ids_for_fasttokenizer\r\n - [ ] tests/test_tokenization_common.py:3425 T5TokenizationTest.test_save_pretrained\r\n - [ ] tests/models/t5/test_tokenization_t5.py:271 T5TokenizationTest.test_special_tokens_initialization",
"This can also be made non \"breakable\" with a flag. Up to debate since it is a bug fix.",
"Edit: just to make sure, I did more testing and unfortunately , there is one bug: \r\n```python \r\n>>>tokenizer.tokenize(\"Hello <extra_id_0>\")\r\n['_', '_Hello', '<extra_id_0>']\r\n``` \r\ninstead of \r\n```python \r\n>>>tokenizer.tokenize(\"Hello <extra_id_0>\")\r\n['_Hello', '<extra_id_0>']\r\n``` \r\nThis is because we have to prepend a `_` instead of a space. (`text = SPIECE_UNDERLINE + text`. Not a single test caught this when runing `pytest tests -k t5` which is interesting. \r\nFixing asap and adding tests. This is becoming very complex ๐ ",
"I'm getting this legacy behaviour warning come up when simply loading a T5 tokenizer - it appears even before using the tokenizer. Is there an updated way to load the tokenizer? The warning appears when running the following lines of code:\r\n\r\nfrom transformers import AutoTokenizer\r\ntokeniser = AutoTokenizer.from_pretrained(\"google/mt5-small\")\r\n\r\nThe error is:\r\nYou are using the legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This means that tokens that come after special tokens will not be properly handled. We recommend you to read the related pull request available at https://github.com/huggingface/transformers/pull/24565\r\n/usr/local/lib/python3.10/dist-packages/transformers/convert_slow_tokenizer.py:470: UserWarning: The sentencepiece tokenizer that you are converting to a fast tokenizer uses the byte fallback option which is not implemented in the fast tokenizers. In practice this means that the fast version of the tokenizer can produce unknown tokens whereas the sentencepiece version would have converted these unknown tokens into a sequence of byte tokens matching the original piece of text.\r\n warnings.warn(",
"Yep, just set `legacy=False`. The goal of the warning is for you to decide wether or not you thing the legacy behaviour is alright with you or not. ",
"so tokenizers just have to be loaded with `legacy=False` forever now? Seems like an odd choice.",
"Since this is a breaking change, the next version of transformers will probably have `legacy=False` by default, and remove the warning. We don't really have much choice in order to preserve backward compatibility ๐
",
"+1 for this proposal, since my users are asking why this error shows up for them and if the incorrect behavior is the default that is not desirable for anyone just wanting to use a model. People needing the legacy behavior can then opt-in. Hardcoding legacy=False in my own code feels wrong to me.",
"I am also a little confused when I use LLaMA tokenizer. Is it wrong for LLaMA?",
"For LlaMa, it is a bit special. The original tokenizer does not have the eos and bos as part of the special tokens This means that they are parsed (split) by the model, while this does not happen with transformers.\r\nI am testing to make sure whether this should apply for llama based on llama2. Whatever the case, if you don't use legacy, and a special token is in the middle of a sequence:\r\n- `Hey how are you?</s> I'm fine and you`'s encoding will be wrong for two reasons. 1, we split special tokens. 2, an extra space will be added between `</s> I`. Using `legacy=False` the extra space is not added.\r\n- `<s>Hey how are you?` will have a different encoding because llama would tokenize it as [`<`, `s`, `>`, `Hey`] while we would tokenizer it as [`<s>`, `_Hey`] note the extra `_`. With `legacy=False` we will have [`<s>`, `Hey`] which is already better. I am implementing now the possibility to split special tokens (meaning ignore that they are special) which will bridge the final gap to allow us to have [`<`, `s`, `>`, `Hey`].\r\n\r\nAn issue can come up if:\r\n- You manualy add the eos at the beginning using ` tokenizer.encode(\"<s>Hey\")` the `tokenizer.sp_model` only sees `Hey` which get's encoded as [`_Hey`], but the `_` is removed by `legacy=False`. This is not what you want in this specific case because the special token is placed at the start. With #25081, the special tokens should be split by default for llama, which will make things right! ",
"So if text does not explicitly contain special tokens, the tokenizer can work well?",
"and so how does one continue using the legacy mode, but remove the warning?\r\n\r\nSurely if the model designer wants the legacy mode they should be able to select that w/o having the user puzzle over the warning.",
"Ok will make it default to None, that way you can be warned only if you just have not set the argument! Thanks for the feedback ",
"that's a great idea, Arthur - thank you!",
"See #25131 ๐ ",
"- I just tested the transformers=4.31.0, however, the bug is still there, shown as following:\r\n\r\n```\r\n>>> btokenizer = transformers.AutoTokenizer.from_pretrained(model_path,legacy=False) \r\n>>> btokenizer.decode(btokenizer(\"Hey how are you?</s> I'm fine and you\")['input_ids'])\r\n<s> Hey how are you?</s> I'm fine and you\r\n```\r\n\r\nNote that there are two spaces between` </s> and I`.\r\n\r\n- Another issue that may require more clarification that should a token at the start of a string, with the current tokenizer, be added with a prefix space, no matter with or without a special token in front of it? \r\n\r\n- In transformers of LLama , it says \r\n\r\n> The LLaMA tokenizer is a BPE model based on [sentencepiece](https://github.com/google/sentencepiece). One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. โBananaโ), the tokenizer does not prepend the prefix space to the string.\r\n\r\nIt is confusing how can the first token is not a start of a word?\r\n\r\n\r\n\r\n",
"Hey! A few answers:\r\n- you are using the fast version, which was never mentioned to be fixed yet.\r\n- you are comparing the decoded outputs, which were not touched either. \r\n- I don't understand your question. My recommendation is to wait a little bit until we fix the default stripping mecanism with #23909, then will adresse the final issues with Llama. Decoding issue was mentioned here #25073. The first token is always a start of a word, but when you are the first token after a special token, you are just a token, not the first one ๐ ",
"@ArthurZucker You can fix whatever possible now, but many people have finetuned v2 models without using `legacy=False`, so that's not really fair to put on the user. There should have been a strict error to prevent this, not a warning as most people ignore, and while I appreciate what the HF team does, this was handled very poorly. ",
"I hope the above can be resolved in a patch release, I had expected that earlier. Waiting until 4.32 while the default is causing issues in real world usage seems to long, if we have 4.31.1 with legacy=false by default it solves the issues.",
"The released llama2 models on the hub were all using `legacy = False`, so if they were not using the original tokenizer, not entirely sure what we could have done better. A lot of people trained `llama1` with `legacy=False` and use the `LlamaTokenizer` for other places, we cannot have a breaking change raising an error this way. \r\nMore over, a lot of people use the `fast` version of the tokenizer, which was not changed either. \r\n\r\nOur goal is not to default to `legacy=False` but leave users choose the best solution for them, thus a warning. We are now defaulting to `True` if the `legacy` parameter was not set, with a warning. If you want to suggest maybe improvements for the warning I am all ears! We can maybe make it more visible? \r\n\r\n\r\nNote that the decoding is the same regardless of `legacy`, and `legacy` fixes only affects slow tokenizers.\r\n\r\n```python \r\n>>> from transformers import AutoTokenizers\r\n>>> btokenizer = AutoTokenizer.from_pretrained(\"meta-llama/Llama-2-7b-hf\",legacy=True)\r\n>>> btokenizer.decode(btokenizer(\"Hey how are you?</s> I'm fine and you\")['input_ids'])\r\n\"<s> Hey how are you?</s> I'm fine and you\"\r\n```\r\n\r\n@official-elinas if they finetuned the model, then they can keep using `legacy=False`, since the model learned that there is an extra space after the special tokens. There are a lot of applications, but if you added a new token like `<bot>`, then anyway the model learns a new concepts, will learn at the same time that there is an extra space afterwards. \r\n\r\nAnyway I'm also really sorry there was a hidden bug, and the first Llama release was also very messy, I should have checked more than twice! Thanks for your feedbacks ๐ค \r\n",
"Just to give two cents here. I run my tests with both fast and slow tokenizers and I found that using legacy=True leads to more inconsistent behavior between the two. In the examples below the slow tokenizer yields `unk` when the fast one does not. Interestingly, I would expect the legacy one to be broken, _not_ the non-legacy one. I do not quite understand why the legacy one seems to work fine but the new one does not, so I will stick with the legacy behavior.\r\n\r\n```\r\nfrom transformers import AutoTokenizer\r\n\r\n# LEGACY\r\n########\r\ns = \"</s> quiet\" # </s> is a special token\r\ntokenizer_slow = AutoTokenizer.from_pretrained(\"google/mt5-base\", use_fast=False, legacy=True)\r\ntokenizer_fast = AutoTokenizer.from_pretrained(\"google/mt5-base\", use_fast=True, legacy=True)\r\n\r\nprint(tokenizer_slow.decode(tokenizer_slow(s).input_ids))\r\n# </s> quiet</s>\r\nprint(tokenizer_fast.decode(tokenizer_fast(s).input_ids))\r\n# </s> quiet</s>\r\n\r\n# Without space\r\ns = \"</s>quiet\" # </s> is a special token\r\ntokenizer_slow = AutoTokenizer.from_pretrained(\"google/mt5-base\", use_fast=False, legacy=True)\r\ntokenizer_fast = AutoTokenizer.from_pretrained(\"google/mt5-base\", use_fast=True, legacy=True)\r\n\r\nprint(tokenizer_slow.decode(tokenizer_slow(s).input_ids))\r\n# </s> quiet</s>\r\nprint(tokenizer_fast.decode(tokenizer_fast(s).input_ids))\r\n# </s> quiet</s>\r\n\r\n# NOT LEGACY\r\n############\r\ns = \"</s> quiet\" # </s> is a special token\r\ntokenizer_slow = AutoTokenizer.from_pretrained(\"google/mt5-base\", use_fast=False, legacy=False)\r\ntokenizer_fast = AutoTokenizer.from_pretrained(\"google/mt5-base\", use_fast=True, legacy=False)\r\n\r\nprint(tokenizer_slow.decode(tokenizer_slow(s).input_ids))\r\n# </s><unk></s>\r\nprint(tokenizer_fast.decode(tokenizer_fast(s).input_ids))\r\n# </s> quiet</s>\r\n\r\n# Without space\r\ns = \"</s>quiet\" # </s> is a special token\r\ntokenizer_slow = AutoTokenizer.from_pretrained(\"google/mt5-base\", use_fast=False, legacy=False)\r\ntokenizer_fast = AutoTokenizer.from_pretrained(\"google/mt5-base\", use_fast=True, legacy=False)\r\n\r\nprint(tokenizer_slow.decode(tokenizer_slow(s).input_ids))\r\n# </s><unk></s>\r\nprint(tokenizer_fast.decode(tokenizer_fast(s).input_ids))\r\n# </s> quiet</s>\r\n\r\n```",
"Yes yes, if you look at #25224 will fix the unknown. Reported in #25176. It's a bit of a mess, but should all be fixed now! Pretty deep bug indeed",
"Hey @ArthurZucker -- so if I want to add a new token to the LLamaTokenizer like 'your_token', should I add the token to the sentencepiece model [#25224](https://github.com/huggingface/transformers/pull/25224) and then load the tokenizer using huggingface? Do I have to change the tokenizer.json/special_tokens_map.json/tokenizer_config.json of the LLamaTokenizer as well and how should I change them? \r\n\r\nCurrently using huggingface version 4.31.0.",
"It depends on the expected behaviour. ( simply put, if you want an extra space or not. But should not make a difference overall in results)\r\nIf you train the model, it won't make a difference (intuition mostly but you are either teaching him that `<new_token>` always has a space after it or not).\r\n",
"After cloning the repository for the ProtT5 model (Embedding/PyTorch/Advanced/ProtT5-XL-UniRef50.ipynb) and running the code with the exact same set of inputs as seen in the original, we are getting a different output. The numbers are varying from what would be expected given that we didn't change anything. Does anybody know why this might be happening?\r\n\r\nWe are also receiving an error that mentions the Legacy behavior, and are not sure of the significance.",
"The warning is just here to tell you to chose whether you want the legacy behaviour or not. Then you save the model and can use it without the warning. If the model was trained with the previous legacy behaviour, you should probably use it too.",
"@ArthurZucker I already set legacy = True load llama2-7b-chat , but not lucky , also get this warning \r\n\r\n\r\ntransformers==4.31.0\r\nmy code:\r\ntokenizer = AutoTokenizer.from_pretrained(model_name,legcy=False,use_fast=False)\r\n\r\n\r\nwarning:\r\n\r\nYou are using the legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This means that tokens that come after special tokens will not be properly handled. We recommend you to read the related pull request available at https://github.com/huggingface/transformers/pull/24565",
"The code you copy pasted has a type:\r\n`tokenizer = AutoTokenizer.from_pretrained(model_name,legcy=False,use_fast=False)` shoud be \r\n`tokenizer = AutoTokenizer.from_pretrained(model_name,legacy=False,use_fast=False)`",
"when i use api there is some warning: \r\nYou are using the legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This means that tokens that come after special tokens will not be properly handled. \r\nand \r\n requests.exceptions.ConnectionError: HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Read timed out.\r\nplease tell me what should i do,thank you"
] | 1,688 | 1,695 | 1,688 |
COLLABORATOR
| null |
# What does this PR do?
Fixes the `T5Tokenizer` (not the fast one yet). At the same time, it addresses part of https://github.com/huggingface/transformers/issues/11531.
When converting `UMT5`, I created a reproduction snippet for any t5x model from the original repo. I realized that a very small variation in the input completely changes the output for non-finetuned models. The issue lies in the way we process `<extra_id_xx>`.
Example:
```python
# t5-base tokenizer
>>> tokenizer.encode("<extra_id_0>. Hello", add_special_tokens = False)
[32099, 3, 5, 8774] # ['<extra_id_0>', ' โ', '.', 'โHello']
# seqio.SentencePieceVocabulary(vocab_path, extra_ids = 300)
>>> processor.encode("<extra_id_0>. Hello")
[32099, 5, 8774] # ['<extra_id_0>', '.', 'โHello']
#after fix:
>>> tokenizer.encode("<extra_id_0>. Hello", add_special_tokens = False)
[32099, 5, 8774] # ['<extra_id_0>', '.', 'โHello']
```
The reason is that t5x wraps around `sentencepiece`, and [adds the extra ids to the vocab](https://github.com/google/seqio/blob/4d3097973e9e24ec2963319ec3c5ff518811060f/seqio/vocabularies.py#L362), but they are not saved that way.
We don't add them to the vocab, so when we tokenize, we split on special tokens, thus the sentencepiece model only sees:
```python
>>> tokenizer.sp_model.encode(". Hello")
[273, 274, 9]
```
So the original model never sees a `.` (or a lot of other characters) on its own, which is why we end up adding an extra space...
This is a bug fix with regard to training; it is **breaking** in the sense that it should remove the space.
TODO:
- [x] Extra checks should be added to make sure this does not do anything else (like stripping a ` `): that, for example, would break `tokenizer.encode(". Hello")`, as it would remove the prefix space that is normally added (see the sketch below).
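As a rough illustration of that check, a minimal sketch (illustrative only: the `legacy` flag follows this PR, the expected ids are taken from the snippet above, and the exact tests added to the repo may differ):
```python
from transformers import AutoTokenizer

# Old (legacy) behaviour vs. the behaviour introduced by this PR.
tok_legacy = AutoTokenizer.from_pretrained("t5-base", use_fast=False)
tok_fixed = AutoTokenizer.from_pretrained("t5-base", use_fast=False, legacy=False)

# No stray space should be inserted after the sentinel token (ids from the snippet above).
assert tok_fixed.encode("<extra_id_0>. Hello", add_special_tokens=False) == [32099, 5, 8774]

# The prefix space normally added at the start of a plain string (no special token involved)
# should be unaffected by the fix.
assert tok_fixed.encode(". Hello", add_special_tokens=False) == tok_legacy.encode(". Hello", add_special_tokens=False)
```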
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24565/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24565/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24565",
"html_url": "https://github.com/huggingface/transformers/pull/24565",
"diff_url": "https://github.com/huggingface/transformers/pull/24565.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24565.patch",
"merged_at": 1688101244000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24564
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24564/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24564/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24564/events
|
https://github.com/huggingface/transformers/issues/24564
| 1,779,920,514 |
I_kwDOCUB6oc5qF26C
| 24,564 |
InstructBlipProcessor not working with load_in_4bit and load_in_8bit
|
{
"login": "fraferra",
"id": 5224147,
"node_id": "MDQ6VXNlcjUyMjQxNDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5224147?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fraferra",
"html_url": "https://github.com/fraferra",
"followers_url": "https://api.github.com/users/fraferra/followers",
"following_url": "https://api.github.com/users/fraferra/following{/other_user}",
"gists_url": "https://api.github.com/users/fraferra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fraferra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fraferra/subscriptions",
"organizations_url": "https://api.github.com/users/fraferra/orgs",
"repos_url": "https://api.github.com/users/fraferra/repos",
"events_url": "https://api.github.com/users/fraferra/events{/privacy}",
"received_events_url": "https://api.github.com/users/fraferra/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada ",
"Hi @fraferra \r\nIn https://github.com/huggingface/transformers/pull/24555 I have fixed the a silent issue with processors that you are currently facing, can you try to install transformers from source and run:\r\n\r\n```python\r\nfrom transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration\r\nimport torch\r\nfrom PIL import Image\r\n\r\ntorch.cuda.empty_cache()\r\nmodel = InstructBlipForConditionalGeneration.from_pretrained(\"Salesforce/instructblip-vicuna-7b\", load_in_4bit=True, torch_dtype=torch.bfloat16)\r\nprocessor = InstructBlipProcessor.from_pretrained(\"Salesforce/instructblip-vicuna-7b\", load_in_4bit=True, torch_dtype=torch.bfloat16)\r\n\r\nimage = Image.open('examples/test1.jpeg')\r\ninputs = processor(images=image, text='', return_tensors=\"pt\").to(device, torch.bfloat16)\r\noutputs = model.generate(\r\n **inputs,\r\n do_sample=False,\r\n num_beams=5,\r\n max_length=256,\r\n min_length=1,\r\n top_p=0.9,\r\n repetition_penalty=1.5,\r\n length_penalty=1.0,\r\n temperature=1,\r\n)\r\n```\r\nCheck a more concrete example here: https://github.com/huggingface/transformers/blob/66954ea25e342fd451c26ec1c295da0b8692086b/tests/models/instructblip/test_modeling_instructblip.py#L524",
"Thank you @younesbelkada for looking into it! Is it possible that `BatchEncoding.to()` needs to be updated?\r\nI can see in the source code that `BatchEncoding.to()` (https://github.com/huggingface/transformers/blob/9e28750287df57942d716083ae53bb4e766104c2/src/transformers/tokenization_utils_base.py#L756) only takes 1 argument.\r\n\r\nI am getting the following error when trying to run your code snippet:\r\n```\r\n 1 device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\r\n 2 image = Image.open('examples/test1.jpeg')\r\n----> 3 inputs = processor(images=image, text='', return_tensors=\"pt\").to(device, torch.bfloat16)\r\n 4 outputs = model.generate(\r\n 5 **inputs,\r\n 6 do_sample=False,\r\n (...)\r\n 13 temperature=1,\r\n 14 )\r\n\r\nTypeError: BatchEncoding.to() takes 2 positional arguments but 3 were given\r\n```",
"Pretty weird since `InstructBlipProcessor.__call__` should return `BatchFeature` which the `to` method can take `*args, **kwargs` unlike the one from `BatchEncoding` which only takes `device` as an argument.",
"Hi @fraferra \r\nCan you install transformers from the main branch and try again?\r\n```\r\npip uninstall transformers\r\npip install git+https://github.com/huggingface/transformers.git\r\n```",
"@younesbelkada it worked, thank you! for some reason it wouldnt update to the latest transformers' version in the conda env. After uninstalling it and reinstalling it after it returned `BatchFeature`",
"Thank you @fraferra feel free to close the issue ! Let us know if you have more questions",
"as the vision is a model, and uses the llm model vicunia or optB7 etc... would there be a way to just use the already loaded model to ask a txt question and get a text answer irrelevant to the image? just use llm as an llm?"
] | 1,687 | 1,690 | 1,688 |
NONE
| null |
### System Info
transformers @ git+https://github.com/huggingface/transformers@68c92981ff2b804979d2e6107eeefe298d1e5183
Python 3.11.4
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.47.03 Driver Version: 510.47.03 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A100-SXM... Off | 00000000:00:04.0 Off | 0 |
| N/A 36C P0 50W / 400W | 845MiB / 40960MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 12365 C ...nda/envs/myenv/bin/python 843MiB |
+-----------------------------------------------------------------------------+
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Currently trying to run the following script:
```
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration
import torch
from PIL import Image
torch.cuda.empty_cache()
model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-vicuna-7b", load_in_4bit=True, torch_dtype=torch.bfloat16)
processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b", load_in_4bit=True, torch_dtype=torch.bfloat16)
image = Image.open('examples/test1.jpeg')
inputs = processor(images=image, text='', return_tensors="pt").to(device)
outputs = model.generate(
**inputs,
do_sample=False,
num_beams=5,
max_length=256,
min_length=1,
top_p=0.9,
repetition_penalty=1.5,
length_penalty=1.0,
temperature=1,
)
```
But I obtain the following error (see below). Is it possible that InstructBlipForConditionalGeneration does not yet support `load_in_4bit`?
Error logs:
```
RuntimeError Traceback (most recent call last)
Cell In[6], line 1
----> 1 outputs = model.generate(
2 **inputs,
3 do_sample=False,
4 num_beams=5,
5 max_length=256,
6 min_length=1,
7 top_p=0.9,
8 repetition_penalty=1.5,
9 length_penalty=1.0,
10 temperature=1,
11 )
File /opt/conda/envs/myenv/lib/python3.11/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File /opt/conda/envs/myenv/lib/python3.11/site-packages/transformers/models/instructblip/modeling_instructblip.py:1517, in InstructBlipForConditionalGeneration.generate(self, pixel_values, qformer_input_ids, qformer_attention_mask, input_ids, attention_mask, **generate_kwargs)
1514 self._preprocess_accelerate()
1516 batch_size = pixel_values.shape[0]
-> 1517 image_embeds = self.vision_model(pixel_values, return_dict=True).last_hidden_state
1519 image_attention_mask = torch.ones(image_embeds.size()[:-1], dtype=torch.long, device=image_embeds.device)
1521 query_tokens = self.query_tokens.expand(image_embeds.shape[0], -1, -1)
File /opt/conda/envs/myenv/lib/python3.11/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/envs/myenv/lib/python3.11/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
163 output = old_forward(*args, **kwargs)
164 else:
--> 165 output = old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File /opt/conda/envs/myenv/lib/python3.11/site-packages/transformers/models/instructblip/modeling_instructblip.py:538, in InstructBlipVisionModel.forward(self, pixel_values, output_attentions, output_hidden_states, return_dict)
535 if pixel_values is None:
536 raise ValueError("You have to specify pixel_values")
--> 538 hidden_states = self.embeddings(pixel_values)
540 encoder_outputs = self.encoder(
541 inputs_embeds=hidden_states,
542 output_attentions=output_attentions,
543 output_hidden_states=output_hidden_states,
544 return_dict=return_dict,
545 )
547 last_hidden_state = encoder_outputs[0]
File /opt/conda/envs/myenv/lib/python3.11/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/envs/myenv/lib/python3.11/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
163 output = old_forward(*args, **kwargs)
164 else:
--> 165 output = old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File /opt/conda/envs/myenv/lib/python3.11/site-packages/transformers/models/instructblip/modeling_instructblip.py:113, in InstructBlipVisionEmbeddings.forward(self, pixel_values)
111 batch_size = pixel_values.shape[0]
112 target_dtype = self.patch_embedding.weight.dtype
--> 113 patch_embeds = self.patch_embedding(pixel_values) # shape = [*, width, grid, grid]
114 patch_embeds = patch_embeds.flatten(2).transpose(1, 2)
116 class_embeds = self.class_embedding.expand(batch_size, 1, -1).to(target_dtype)
File /opt/conda/envs/myenv/lib/python3.11/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/envs/myenv/lib/python3.11/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
163 output = old_forward(*args, **kwargs)
164 else:
--> 165 output = old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File /opt/conda/envs/myenv/lib/python3.11/site-packages/torch/nn/modules/conv.py:463, in Conv2d.forward(self, input)
462 def forward(self, input: Tensor) -> Tensor:
--> 463 return self._conv_forward(input, self.weight, self.bias)
File /opt/conda/envs/myenv/lib/python3.11/site-packages/torch/nn/modules/conv.py:459, in Conv2d._conv_forward(self, input, weight, bias)
455 if self.padding_mode != 'zeros':
456 return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
457 weight, bias, self.stride,
458 _pair(0), self.dilation, self.groups)
--> 459 return F.conv2d(input, weight, bias, self.stride,
460 self.padding, self.dilation, self.groups)
RuntimeError: Input type (float) and bias type (c10::BFloat16) should be the same
```
### Expected behavior
Produce output string as expected
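For reference, the resolution that worked here (see the comments above) was to install `transformers` from source and pass the compute dtype along with the device when moving the processor output; a minimal sketch, with a placeholder image path and empty prompt:
```python
import torch
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
model = InstructBlipForConditionalGeneration.from_pretrained(
    "Salesforce/instructblip-vicuna-7b", load_in_4bit=True, torch_dtype=torch.bfloat16
)
processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")

image = Image.open("examples/test1.jpeg")  # placeholder path
# Passing the dtype keeps pixel_values in the same dtype as the bfloat16 vision weights,
# which avoids the "Input type (float) and bias type (c10::BFloat16)" error above.
inputs = processor(images=image, text="", return_tensors="pt").to(device, torch.bfloat16)
outputs = model.generate(**inputs, num_beams=5, max_length=256)
print(processor.batch_decode(outputs, skip_special_tokens=True))
```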
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24564/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24564/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24563
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24563/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24563/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24563/events
|
https://github.com/huggingface/transformers/issues/24563
| 1,779,901,720 |
I_kwDOCUB6oc5qFyUY
| 24,563 |
Dataset features disappear after initializing Trainer
|
{
"login": "lxrswdd",
"id": 35613337,
"node_id": "MDQ6VXNlcjM1NjEzMzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/35613337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lxrswdd",
"html_url": "https://github.com/lxrswdd",
"followers_url": "https://api.github.com/users/lxrswdd/followers",
"following_url": "https://api.github.com/users/lxrswdd/following{/other_user}",
"gists_url": "https://api.github.com/users/lxrswdd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lxrswdd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lxrswdd/subscriptions",
"organizations_url": "https://api.github.com/users/lxrswdd/orgs",
"repos_url": "https://api.github.com/users/lxrswdd/repos",
"events_url": "https://api.github.com/users/lxrswdd/events{/privacy}",
"received_events_url": "https://api.github.com/users/lxrswdd/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Yes, the `Trainer` removes any inputs not accepted by your model our your model won't be able to do a forward pass. You can remove (at your own risk) this by setting [`remove_unused_coumns`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.remove_unused_columns) in your `TrainingArguments` to `False`.",
"@sgugger Thank you for replying.\r\n\r\nSo I am facing this problem during model testing, the `do_predict` part. \r\nI still need the feature information from the dataset before the Trainer is initialized. For instance, the file name.\r\nSo based on your answer above, I am thinking of deep copying the dataset so I can loop through the identical dataset by index to get the information I need while feeding the `val_dataset` to the Trainer.\r\n\r\nI am new to Huggingface, may I know what's the conventional way to do so?\r\n\r\n```\r\n\r\n trainer = CTCTrainer(\r\n model=model,\r\n data_collator=data_collator,\r\n args=training_args,\r\n compute_metrics=compute_metrics,\r\n train_dataset=train_dataset,\r\n eval_dataset=val_dataset,\r\n tokenizer=processor.feature_extractor,\r\n )\r\n\r\n # print(val_dataset_original[0]['file'])\r\n # print('my test2----------------------------------') \r\n\r\n if last_checkpoint is not None:\r\n checkpoint = last_checkpoint\r\n elif model_args.model_name_or_path is not None and os.path.isdir(model_args.model_name_or_path):\r\n checkpoint = model_args.model_name_or_path\r\n else:\r\n checkpoint = None\r\n\r\n if training_args.do_train:\r\n trainer.train(resume_from_checkpoint=checkpoint)\r\n trainer.save_model() \r\n\r\n if training_args.do_predict:\r\n logger.info('******* Predict ********')\r\n\r\n data_collator.audio_only=True\r\n predictions, labels, metrics = trainer.predict(val_dataset, metric_key_prefix=\"predict\")\r\n logits_ctc, logits_cls = predictions\r\n pred_ids = np.argmax(logits_cls, axis=-1)\r\n pred_probs = F.softmax(torch.from_numpy(logits_cls).float(), dim=-1)\r\n print(val_dataset)\r\n with open(data_args.output_file, 'w') as f:\r\n for i in range(len(pred_ids)):\r\n f.write(val_dataset[i]['file'].split(\"/\")[-1] + \" \" + str(len(val_dataset[i]['input_values'])/16000) + \" \")\r\n pred = pred_ids[i]\r\n f.write(str(pred)+' ')\r\n for j in range(4):\r\n f.write(' ' + str(pred_probs[i][j].item()))\r\n f.write('\\n')\r\n f.close()\r\n```",
"Hey @lxrswdd - I see the `_prepare_inputs` method that you've overridden in the Trainer class is purely to get your dataset in the right format for the model\r\n\r\nWhat you're probably better off doing here is pre-processing your dataset ahead of time, transforming the raw audio values to normalised model input values using an appropriate feature extractor. You can do this quite straightforwardly using ๐ค Datasets's `.map` method\r\n\r\nOnce you have your pre-processed input values, you can collate them into _batches_ by defining an appropriate data collator. We have several end-to-end examples that will perform the pre-processing a collate steps for you: all you need to do is switch the dataset id for your dataset on the Hub. See [examples/speech-recognition](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#connectionist-temporal-classification) for details\r\n\r\nLikewise, you can follow this excellent blog post for fine-tuning a CTC system with the ๐ค Trainer API: https://huggingface.co/blog/fine-tune-wav2vec2-english\r\n\r\nThe only real engineering work you'll have to do if you follow these guides is getting your dataset in the right format, for which you can follow this page: https://huggingface.co/docs/datasets/audio_dataset",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,687 | 1,691 | 1,691 |
NONE
| null |
### System Info
- `transformers` version: 4.4.2
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.11.3
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sanchit-gandhi
@sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
class CTCTrainer(Trainer):
def _prepare_inputs(self, inputs: Dict[str, Union[torch.Tensor, Any]]) -> Dict[str, Union[torch.Tensor, Any]]:
for k, v in inputs.items():
if isinstance(v, torch.Tensor):
kwargs = dict(device=self.args.device)
if self.deepspeed and inputs[k].dtype != torch.int64:
kwargs.update(dict(dtype=self.args.hf_deepspeed_config.dtype()))
inputs[k] = v.to(**kwargs)
if k == 'labels': # labels are list of tensor, not tensor, special handle here
for i in range(len(inputs[k])):
kwargs = dict(device=self.args.device)
if self.deepspeed and inputs[k][i].dtype != torch.int64:
kwargs.update(dict(dtype=self.args.hf_deepspeed_config.dtype()))
inputs[k][i] = inputs[k][i].to(**kwargs)
if self.args.past_index >= 0 and self._past is not None:
inputs["mems"] = self._past
return inputs
def training_step(self, model: nn.Module, inputs: Dict[str, Union[torch.Tensor, Any]]) -> torch.Tensor:
"""
Perform a training step on a batch of inputs.
Subclass and override to inject custom behavior.
Args:
model (:obj:`nn.Module`):
The model to train.
inputs (:obj:`Dict[str, Union[torch.Tensor, Any]]`):
The inputs and targets of the model.
The dictionary will be unpacked before being fed to the model. Most models expect the targets under the
argument :obj:`labels`. Check your model's documentation for all accepted arguments.
Return:
:obj:`torch.Tensor`: The tensor with training loss on this batch.
"""
model.train()
inputs = self._prepare_inputs(inputs)
if self.use_amp:
with autocast():
loss = self.compute_loss(model, inputs)
else:
loss = self.compute_loss(model, inputs)
if self.args.n_gpu > 1:
loss = loss.mean()
if self.args.gradient_accumulation_steps > 1:
loss = loss / self.args.gradient_accumulation_steps
if self.use_amp:
self.scaler.scale(loss).backward()
elif self.use_apex:
with amp.scale_loss(loss, self.optimizer) as scaled_loss:
scaled_loss.backward()
elif self.deepspeed:
self.deepspeed.backward(loss)
else:
loss.backward()
return loss.detach()
```
### Expected behavior
I am trying to run the code from https://github.com/TideDancer/interspeech21_emotion
I tried my best to recreate the environment.
I am using datasets=1.4.2
transformers=4.4.2
I manually print out the dataset value at each stage to debug.
The datasets contains the following features:
```
Dataset({ features: ['emotion', 'file', 'input_values', 'sampling_rate', 'speech', 'text'],
num_rows: 507})
```
The dataset loses all its features after the Trainer is initialized. `my test1` works fine but `my test2` raises an error.
```
print(val_dataset[0]['file'])
print('my test1----------------------------------')
val_dataset_original = val_dataset
trainer = CTCTrainer(
model=model,
data_collator=data_collator,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_dataset,
eval_dataset=val_dataset,
tokenizer=processor.feature_extractor,
)
print(val_dataset_original[0]['file'])
print('my test2----------------------------------')
```
It then raises `KeyError: 'file'`. When I print the dataset again, it turns out only 'input_values' is left.
If this is difficult to reproduce, is there a way I can deep copy the dataset? Because I need the 'file' information to write the output results.
I have tried `val_dataset_copy = val_dataset`, but both dataset variables are affected by the initialization of the trainer.
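A minimal sketch of the two workarounds discussed in the comments above, continuing the snippet in this issue (variable names are the ones already defined there; keeping extra columns is at your own risk, since the model's forward pass will not accept them):
```python
# Option 1: tell the Trainer not to drop columns the model's forward() does not accept
# (your data collator must then strip them itself before the forward pass).
training_args.remove_unused_columns = False

# Option 2: copy out the metadata you need *before* building the Trainer,
# since the Trainer mutates the dataset object it is given in place.
kept_files = list(val_dataset["file"])
kept_durations = [len(x) / 16000 for x in val_dataset["input_values"]]

# ...then build the CTCTrainer exactly as in the snippet above and use
# kept_files / kept_durations when writing the prediction output file.
```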
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24563/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24562
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24562/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24562/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24562/events
|
https://github.com/huggingface/transformers/issues/24562
| 1,779,811,473 |
I_kwDOCUB6oc5qFcSR
| 24,562 |
eval_loss returning nan
|
{
"login": "NHendrickson9616",
"id": 119625034,
"node_id": "U_kgDOByFVSg",
"avatar_url": "https://avatars.githubusercontent.com/u/119625034?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NHendrickson9616",
"html_url": "https://github.com/NHendrickson9616",
"followers_url": "https://api.github.com/users/NHendrickson9616/followers",
"following_url": "https://api.github.com/users/NHendrickson9616/following{/other_user}",
"gists_url": "https://api.github.com/users/NHendrickson9616/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NHendrickson9616/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NHendrickson9616/subscriptions",
"organizations_url": "https://api.github.com/users/NHendrickson9616/orgs",
"repos_url": "https://api.github.com/users/NHendrickson9616/repos",
"events_url": "https://api.github.com/users/NHendrickson9616/events{/privacy}",
"received_events_url": "https://api.github.com/users/NHendrickson9616/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Having the same problems.\r\n\r\nOnly getting the following out\r\n`({'eval_runtime': 5.4991, 'eval_samples_per_second': 65.829, 'eval_steps_per_second': 1.091, 'epoch': 2.0},)`",
"@glaand \r\n\r\nCould you provide a code snippet that could run directly? Thanks in advance!"
] | 1,687 | 1,691 | 1,691 |
NONE
| null |
### System Info
- `transformers` version: 4.28.0
- Platform: Linux-3.10.0-1160.24.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.12
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu118 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Still new to this, don't know
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Here is the set up before I run the code:
```
trained_path = '/data/user/home/nchendri/biobertuab/modelInfo'
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir=trained_path,
overwrite_output_dir=True,
num_train_epochs=32,
per_device_train_batch_size=16,
save_steps=10_000,
save_total_limit=2,
prediction_loss_only=True,
optim="adamw_torch",
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dsTok['train'],
eval_dataset=dsTok['valid'],
)
eval = trainer.evaluate()
```
### Expected behavior
It was supposed to output an eval_loss. I am very new at this and could have missed something in the setup code above.
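One way to narrow this down (a debugging sketch, not a confirmed fix): pull a single batch from the evaluation dataloader and check that it actually contains `labels` and that the model returns a finite loss. If `labels` is missing, or the loss is already NaN on a single batch, the problem lies in the dataset/collator rather than in the Trainer.
```python
import torch

batch = next(iter(trainer.get_eval_dataloader()))
print(batch.keys())  # eval_loss can only be reported if "labels" is present

model.eval()
with torch.no_grad():
    out = model(**{k: v.to(model.device) for k, v in batch.items()})
print(out.loss)  # should be a finite tensor, not None / NaN
```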
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24562/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24561
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24561/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24561/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24561/events
|
https://github.com/huggingface/transformers/pull/24561
| 1,779,786,606 |
PR_kwDOCUB6oc5UL-yx
| 24,561 |
llama fp16 torch.max bug fix
|
{
"login": "prathikr",
"id": 31260940,
"node_id": "MDQ6VXNlcjMxMjYwOTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/31260940?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prathikr",
"html_url": "https://github.com/prathikr",
"followers_url": "https://api.github.com/users/prathikr/followers",
"following_url": "https://api.github.com/users/prathikr/following{/other_user}",
"gists_url": "https://api.github.com/users/prathikr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prathikr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prathikr/subscriptions",
"organizations_url": "https://api.github.com/users/prathikr/orgs",
"repos_url": "https://api.github.com/users/prathikr/repos",
"events_url": "https://api.github.com/users/prathikr/events{/privacy}",
"received_events_url": "https://api.github.com/users/prathikr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ArthurZucker I addressed your commment, please have another look. Thank you.",
"@amyeroberts I addressed your comment. Please merge asap, thank you."
] | 1,687 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
This PR explicitly sets the threshold tensor for `torch.max` to the same dtype as that of attn_weights to avoid accidental upcasting during mixed precision training. This unblocks ONNX Runtime integration because, without this fix, the torch onnx exporter receives mismatched types into the `torch.max` operation resulting in the following error:
```
Traceback (most recent call last):
File "run_clm_ort.py", line 641, in <module>
main()
File "run_clm_ort.py", line 589, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/optimum/onnxruntime/trainer.py", line 454, in train
return inner_training_loop(
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/optimum/onnxruntime/trainer.py", line 749, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer.py", line 2735, in training_step
loss = self.compute_loss(model, inputs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/optimum/onnxruntime/trainer.py", line 365, in compute_loss
return super().compute_loss(model_with_loss, inputs, return_outputs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer.py", line 2767, in compute_loss
outputs = model(**inputs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1675, in forward
loss = self.module(*inputs, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/onnxruntime/training/ortmodule/_utils.py", line 375, in _forward
return ortmodule._torch_module.forward(*inputs, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/onnxruntime/training/ortmodule/_utils.py", line 355, in _forward
return torch_module_ort._execution_manager(torch_module_ort.is_training()).forward(*inputs, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/onnxruntime/training/ortmodule/_training_manager.py", line 274, in forward
self._fallback_manager.handle_exception(
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/onnxruntime/training/ortmodule/_fallback.py", line 160, in handle_exception
raise exception
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/onnxruntime/training/ortmodule/_training_manager.py", line 210, in forward
self._initialize_graph_builder()
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/onnxruntime/training/ortmodule/_graph_execution_manager.py", line 502, in _initialize_graph_builder
self._graph_builder.initialize(self._onnx_models.exported_model.SerializeToString(), grad_builder_config)
RuntimeError: /onnxruntime_src/orttraining/orttraining/python/orttraining_pybind_state.cc:786 onnxruntime::python::addObjectMethodsForTraining(pybind11::module&, onnxruntime::python::ExecutionProviderRegistrationFn)::<lambda(onnxruntime::training::OrtModuleGraphBuilder*, const pybind11::bytes&, const onnxruntime::training::OrtModuleGraphBuilderConfiguration&)> [ONNXRuntimeError] : 1 : FAIL : Type Error: Type parameter (T) of Optype (Max) bound to different types (tensor(float16) and tensor(float) in node (/_original_module/base_model/model/layers.0/self_attn/Max).
```
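The change itself is small; roughly (a sketch of the idea inside the Llama attention forward, not the exact diff, where `attn_weights` is the fp16 attention score tensor):
```python
# Build the clamping threshold in the same dtype (and on the same device) as attn_weights,
# so mixed precision does not upcast the comparison and the ONNX exporter sees a single type for Max.
attn_weights = torch.max(
    attn_weights,
    torch.tensor(torch.finfo(attn_weights.dtype).min, device=attn_weights.device, dtype=attn_weights.dtype),
)
```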
Reproduction Instructions:
```Dockerfile
FROM mcr.microsoft.com/azureml/aifx/stable-ubuntu2004-cu117-py38-torch1131
# language-modeling dependencies taken from: https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/requirements.txt
RUN pip install accelerate datasets sentencepiece protobuf evaluate scikit-learn
# additional Hugging Face dependencies
RUN pip install optimum peft transformers
RUN git clone https://github.com/huggingface/optimum.git && \
cd optimum/examples/onnxruntime/language-modelling && \
python run_clm.py --model_name_or_path openlm-research/open_llama_7b_400bt_preview --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --do_train --output_dir output_dir --overwrite_output_dir --fp16 --deepspeed zero_stage_1.json --num_train_epochs 1 --logging_steps 1 --optim adamw_ort_fused
```
Who can review?
- text models: @ArthurZucker and @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24561/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24561",
"html_url": "https://github.com/huggingface/transformers/pull/24561",
"diff_url": "https://github.com/huggingface/transformers/pull/24561.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24561.patch",
"merged_at": 1688483112000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24560
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24560/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24560/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24560/events
|
https://github.com/huggingface/transformers/pull/24560
| 1,779,785,251 |
PR_kwDOCUB6oc5UL-gE
| 24,560 |
Update masked_language_modeling.md
|
{
"login": "condor-cp",
"id": 40066676,
"node_id": "MDQ6VXNlcjQwMDY2Njc2",
"avatar_url": "https://avatars.githubusercontent.com/u/40066676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/condor-cp",
"html_url": "https://github.com/condor-cp",
"followers_url": "https://api.github.com/users/condor-cp/followers",
"following_url": "https://api.github.com/users/condor-cp/following{/other_user}",
"gists_url": "https://api.github.com/users/condor-cp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/condor-cp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/condor-cp/subscriptions",
"organizations_url": "https://api.github.com/users/condor-cp/orgs",
"repos_url": "https://api.github.com/users/condor-cp/repos",
"events_url": "https://api.github.com/users/condor-cp/events{/privacy}",
"received_events_url": "https://api.github.com/users/condor-cp/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
Improves masked_language_modeling documentation. See https://github.com/huggingface/transformers/issues/24546
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24560/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24560",
"html_url": "https://github.com/huggingface/transformers/pull/24560",
"diff_url": "https://github.com/huggingface/transformers/pull/24560.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24560.patch",
"merged_at": 1687989261000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24559
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24559/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24559/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24559/events
|
https://github.com/huggingface/transformers/pull/24559
| 1,779,774,406 |
PR_kwDOCUB6oc5UL8H0
| 24,559 |
Fix Typo
|
{
"login": "tony9402",
"id": 30228292,
"node_id": "MDQ6VXNlcjMwMjI4Mjky",
"avatar_url": "https://avatars.githubusercontent.com/u/30228292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tony9402",
"html_url": "https://github.com/tony9402",
"followers_url": "https://api.github.com/users/tony9402/followers",
"following_url": "https://api.github.com/users/tony9402/following{/other_user}",
"gists_url": "https://api.github.com/users/tony9402/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tony9402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tony9402/subscriptions",
"organizations_url": "https://api.github.com/users/tony9402/orgs",
"repos_url": "https://api.github.com/users/tony9402/repos",
"events_url": "https://api.github.com/users/tony9402/events{/privacy}",
"received_events_url": "https://api.github.com/users/tony9402/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Can you check if this is correct @ArthurZucker ?",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixed an incorrect shape annotation:
- `(seq_len, BS, model_dim) -> (BS, seq_len, model_dim)` -> `(BS, seq_len, model_dim) -> (seq_len, BS, model_dim)`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24559/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24559",
"html_url": "https://github.com/huggingface/transformers/pull/24559",
"diff_url": "https://github.com/huggingface/transformers/pull/24559.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24559.patch",
"merged_at": 1688040247000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24558
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24558/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24558/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24558/events
|
https://github.com/huggingface/transformers/issues/24558
| 1,779,626,220 |
I_kwDOCUB6oc5qEvDs
| 24,558 |
Error when setting a high batch-size: `AttributeError: 'NoneType' object has no attribute 'backward'`
|
{
"login": "orangetin",
"id": 126978607,
"node_id": "U_kgDOB5GKLw",
"avatar_url": "https://avatars.githubusercontent.com/u/126978607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orangetin",
"html_url": "https://github.com/orangetin",
"followers_url": "https://api.github.com/users/orangetin/followers",
"following_url": "https://api.github.com/users/orangetin/following{/other_user}",
"gists_url": "https://api.github.com/users/orangetin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orangetin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orangetin/subscriptions",
"organizations_url": "https://api.github.com/users/orangetin/orgs",
"repos_url": "https://api.github.com/users/orangetin/repos",
"events_url": "https://api.github.com/users/orangetin/events{/privacy}",
"received_events_url": "https://api.github.com/users/orangetin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @muellerzr (?)",
"@pacman100 could there be something more I need to check/do related to the deepspeed plugin when doing this that we might be missing? (basically is there a separate param that we should set on the batch size for the train bs here)",
"I can repro this so let me know if you need more logs. I'm trying to debug this myself too.",
"@orangetin can you tell us more about the deepspeed configuration you are using, how you are launching the script, and the args used? It looks like deepspeed isn't being properly set in the Accelerator hence the issue (or something on those lines). I have a feeling if you don't use deepspeed it will work",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@muellerzr \r\nSame here. Problem occurs only when `per_device_train_batch_size` is too large. but it's strange that when I used another tokenizer, things went right, and `--auto_find_batch_size` worked normally.\r\n\r\nHere is my command to run `run_clm.py`(only a part of it) and `deepspeed.config`. \r\n```command\r\ndeepspeed --include localhost:4,5,6,7 run_clm.py --model_type gpt2 --do_train --do_eval --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --auto_find_batch_size True --gradient_accumulation_steps 16 --learning_rate 0.001 --fp16 False --fp16_full_eval False\r\n```\r\n\r\n```config\r\n{\r\n \"fp16\": {\r\n \"enabled\": false\r\n },\r\n \"zero_optimization\": {\r\n \"stage\": 2,\r\n \"allgather_partitions\": true,\r\n \"allgather_bucket_size\": 2e8,\r\n \"overlap_comm\": true,\r\n \"reduce_scatter\": true,\r\n \"reduce_bucket_size\": 2e8,\r\n \"contiguous_gradients\": true\r\n },\r\n \"optimizer\": {\r\n \"type\": \"AdamW\",\r\n \"params\": {\r\n \"lr\": 0.001,\r\n \"betas\": [\r\n 0.9,\r\n 0.999\r\n ],\r\n \"eps\": 1e-8,\r\n \"weight_decay\": 0\r\n }\r\n },\r\n \"scheduler\": {\r\n \"type\": \"WarmupLR\",\r\n \"params\": {\r\n \"warmup_min_lr\": 0,\r\n \"warmup_max_lr\": \"auto\",\r\n \"warmup_num_steps\": \"auto\"\r\n }\r\n },\r\n \"train_micro_batch_size_per_gpu\": \"auto\"\r\n}\r\n\r\n\r\n```\r\n\r\n\r\nAnd my error track\r\n```\r\n File \"/xxxx/run_clm.py\", line 679, in <module>\r\n main()\r\n File \"/xxxx/run_clm.py\", line 627, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/xxxx/lib/python3.10/site-packages/transformers/trainer.py\", line 1555, in train\r\n return inner_training_loop(\r\n File \"/xxxx/lib/python3.10/site-packages/accelerate/utils/memory.py\", line 136, in decorator\r\n return function(batch_size, *args, **kwargs)\r\n File \"/xxxx/lib/python3.10/site-packages/transformers/trainer.py\", line 1837, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/xxxx/lib/python3.10/site-packages/transformers/trainer.py\", line 2693, in training_step\r\n self.accelerator.backward(loss)\r\n File \"/xxxxxxxxxxx/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1917, in backward\r\n self.deepspeed_engine_wrapped.backward(loss, **kwargs)\r\nAttributeError: 'NoneType' object has no attribute 'backward'\r\n\r\n```",
"Thanks @ekkkkki for the context. I must have missed this. @muellerzr is this enough to go on or would you like more details?",
"Thanks for the ping, I'll take a look at this today or tommorow!",
"any updates for that bug? I can't run it even with batch_size of 2 or 8 (tried in sagemaker with ml.g5.12xlarge and ml.g4dn.12xlarge)\r\nI am out of ideas, even tried to go back to the commit in 21 Aug (which worked for me) and it doesn't (both transformers and accelerate) with deepspeed 0.10.0",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"cc @muellerzr are you still working on this? ",
"I think this ought to be reopened. BTW, it only happens to me when I try to finetune a Llama2 derivative but not when I finetune Mistral or Zephyr. Disabling `auto_find_batch_size` is indeed a workaround, but I'd really like to use that awesome feature.\r\n\r\n",
"cc @pacman100 ",
"The problem seems to be that `release_memory` clears out the deepspeed_engine_wrapped attribute\r\n\r\nhttps://github.com/huggingface/transformers/blob/35478182ce50d04bde5c4ecd0569c2f6ba15bee7/src/transformers/trainer.py#L1547\r\n\r\nwhenever we re-enter `_inner_training_loop`\r\nwhich would have been fine but once the model is already wrapped in the previous try, `accelerator.prepare` will not be called leaving `accelerator.deepspeed_engine_wrapped` None\r\nhttps://github.com/huggingface/transformers/blob/35478182ce50d04bde5c4ecd0569c2f6ba15bee7/src/transformers/trainer.py#L1655-L1660\r\n\r\nAny hacks to get around this?",
"Well for now I am resolving it like so\r\n```\r\nclass HFTrainer(Trainer):\r\n def _inner_training_loop(self, batch_size=None, args=None, resume_from_checkpoint=None, trial=None, ignore_keys_for_eval=None):\r\n # Hack to fix: https://github.com/huggingface/transformers/issues/24558\r\n if self.args.auto_find_batch_size:\r\n self.model_wrapped = self.model\r\n self.deepspeed = None\r\n return super()._inner_training_loop(batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)\r\n```\r\n\r\nI am not entirely sure if this is correct or would even the right decision for Zero 3, but at least makes Zero 1 and 2 work with auto batch size finder",
"So the workaround I was using worked fine for Llama 7B, but with Mistral 7B it is behaving weirdly, it seems to release memory on only one rank but not the other (using only 2 GPUs at the moment) and Trainer gets stuck completely\r\n```\r\n23291MiB\r\n9439MiB\r\n```\r\n\r\nIt seems like one rank went OOM and decided to adjust the batch to lower but the other rank didn't\r\nI'll debug some more but @pacman100 @sgugger can use some help from you for a proper fix ๐
\r\n\r\nMy setup looks like this:\r\n```\r\ntorch==2.1.1+cu118\r\ntransformers[accelerate,deepspeed,sentencepiece,tokenizers]==4.36.1\r\ndatasets==2.14.7\r\npeft==0.6.2\r\nbitsandbytes==0.41.3.post2\r\n```\r\nI am trying 4 bit qlora on 7B models with 2 GPUs\r\n\r\n---\r\n**EDIT:** On reading some code, when I enable this auto batch size finder, I wonder how do the ranks sync and agree on the same per-device batch size?\r\n\r\n---\r\n**EDIT**: It doesn't seem like the batch size gets correctly set in the deepspeed plugin once it is re-adjusted, so I am not sure if the optimizers, schedulers get initialized correctly ๐ค ",
"Hello, there are a lot of things being discussed in this single issue.\r\n\r\n> I am trying 4 bit qlora on 7B models with 2 GPUs\r\n\r\nI don't think qlora is supported with DeepSpeed.\r\n\r\n> The problem seems to be that release_memory clears out the deepspeed_engine_wrapped attribute\r\n> [transformers/src/transformers/trainer.py](https://github.com/huggingface/transformers/blob/35478182ce50d04bde5c4ecd0569c2f6ba15bee7/src/transformers/trainer.py#L1547)\r\n> \r\n> Line 1547 in [3547818](https://github.com/huggingface/transformers/commit/35478182ce50d04bde5c4ecd0569c2f6ba15bee7)\r\n> \r\n> self.accelerator.free_memory() \r\n> whenever we re-enter _inner_training_loop\r\n> which would have been fine but once the model is already wrapped in the previous try, accelerator.prepare will not be called leaving accelerator.deepspeed_engine_wrapped None\r\n\r\nThanks for providing more details. This is a niche issue and based on the available bandwidth, we will prioritize it.\r\n",
"> Hello, there are a lot of things being discussed in this single issue.\r\n\r\nAgreed, sorry for that ๐
\r\n\r\n> I don't think qlora is supported with DeepSpeed.\r\n\r\nInteresting would like a separate discussion for this. It seems to work fine with Zero 2 with static batch size - even compared the loss curves with DDP - they are almost the same. Theoretically, also it makes sense as only optimizer and gradients will be sharded which in qlora are only the trainable adapters in bfloat16/float16/float32. I have seen the community using axolotl also use it successfully. Zero 3 indeed does not work. Anyway, not the topic for this issue.\r\n\r\nThe only reason I brought that up here is because Deepspeed Zero sharding can cause uneven consumption on GPUs and the ranks can then disagree on batch sizes and everything gets stuck\r\n",
"> I don't think qlora is supported with DeepSpeed.\r\n\r\nI use DeepSpeed (ZeRO-2) with both LoRA and QLoRA, and it works greatโuntil I enable `auto_find_batch_size`.\r\n\r\n> Thanks for providing more details. This is a niche issue and based on the available bandwidth, we will prioritize it.\r\n\r\nThis is a niche issue? I feel like most people would rather make use of `auto_find_batch_size` and avoid OOM errors with ease. BTW, I was wrong. This problem does occur when finetuning Llama2 models.",
"> I use DeepSpeed (ZeRO-2) with both LoRA and QLoRA, and it works greatโuntil I enable auto_find_batch_size.\r\n\r\nNice, I meant DeepSpeed ZeRO 3 + QLoRA, should have been clear about it.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I definitely don't want to see this issue marked as stale.",
"@mhillebrand it'll be closed after we merge #28088 which adds the support in for auto batch size finder :) ",
"> @mhillebrand it'll be closed after we merge #28088 which adds the support in for auto batch size finder :)\r\n\r\nAh, I didn't see the linked PR from a month ago. Thank you!"
] | 1,687 | 1,704 | 1,704 |
NONE
| null |
### System Info
Transformers version: latest@github
Accelerate version: latest@github
Deepspeed version: latest@github
### Who can help?
@pacman100 @sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Script: https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py
Use a high `per_device_batch_size` and let `Trainer` drop the batch size. Torchrun launcher with Deepspeed-Zero2.
```
[INFO|trainer.py:1786] 2023-06-28 09:03:54,973 >> ***** Running training *****
[INFO|trainer.py:1787] 2023-06-28 09:03:54,973 >> Num examples = 338
[INFO|trainer.py:1788] 2023-06-28 09:03:54,973 >> Num Epochs = 4
[INFO|trainer.py:1789] 2023-06-28 09:03:54,973 >> Instantaneous batch size per device = 32
[INFO|trainer.py:1790] 2023-06-28 09:03:54,973 >> Total train batch size (w. parallel, distributed & accumulation) = 256
[INFO|trainer.py:1791] 2023-06-28 09:03:54,973 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1792] 2023-06-28 09:03:54,973 >> Total optimization steps = 8
[INFO|trainer.py:1793] 2023-06-28 09:03:54,974 >> Number of trainable parameters = 8,388,608
0%| | 0/8 [00:00<?, ?it/s][INFO|trainer.py:1786] 2023-06-28 09:04:12,933 >> ***** Running training *****
[INFO|trainer.py:1787] 2023-06-28 09:04:12,933 >> Num examples = 338
[INFO|trainer.py:1788] 2023-06-28 09:04:12,934 >> Num Epochs = 4
[INFO|trainer.py:1789] 2023-06-28 09:04:12,934 >> Instantaneous batch size per device = 16
[INFO|trainer.py:1790] 2023-06-28 09:04:12,934 >> Total train batch size (w. parallel, distributed & accumulation) = 256
[INFO|trainer.py:1791] 2023-06-28 09:04:12,934 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1792] 2023-06-28 09:04:12,934 >> Total optimization steps = 12
[INFO|trainer.py:1793] 2023-06-28 09:04:12,936 >> Number of trainable parameters = 8,388,608
0%| | 0/8 [00:16<?, ?it/s]
Traceback (most recent call last):t/s]
File "/app/finetune.py", line 796, in <module>
main()
File "/app/finetune.py", line 732, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/usr/local/lib/python3.8/dist-packages/accelerate/utils/memory.py", line 132, in decorator
return function(batch_size, *args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1938, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2770, in training_step
self.accelerator.backward(loss)
File "/usr/local/lib/python3.8/dist-packages/accelerate/accelerator.py", line 1849, in backward
self.deepspeed_engine_wrapped.backward(loss, **kwargs)
AttributeError: 'NoneType' object has no attribute 'backward'
```
In this case, I knowingly set `per_device_train_batch_size` to 32, which is too large for an A100-80. Trainer drops the batch size from 32 to 16 when it overflows (which is the expected behavior) but then fails in `self.accelerator.backward(loss)`.
I don't see this issue when I set a batch size that fits on the GPU, only when it overflows. I suspect `accelerator.prepare` needs to be called again with the corrected batch size.
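For reference, here is a minimal sketch of how `accelerate`'s batch-size finder retries the wrapped function (assuming `find_executable_batch_size` from `accelerate.utils`, which is the decorator visible in the traceback). Anything prepared outside the retried function (such as the DeepSpeed engine) is not rebuilt between attempts, which would be consistent with the `NoneType` error above:
```python
from accelerate.utils import find_executable_batch_size


@find_executable_batch_size(starting_batch_size=32)
def inner_training_loop(batch_size):
    # Simulate an OOM for batch sizes that are too large; the decorator catches
    # CUDA-OOM style RuntimeErrors and retries the function with half the batch size.
    if batch_size > 16:
        raise RuntimeError("CUDA out of memory.")
    return f"trained with batch_size={batch_size}"


print(inner_training_loop())  # retries 32 -> 16, then succeeds
```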
### Expected behavior
Trainer drops the batch size from 32 to 16 and training continues without failure.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24558/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/24558/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24557
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24557/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24557/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24557/events
|
https://github.com/huggingface/transformers/pull/24557
| 1,779,417,702 |
PR_kwDOCUB6oc5UKrRS
| 24,557 |
Make PT/Flax tests could be run on GPU
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
We don't have jax/flax on our CI runner, so there is no issue there. But when trying to run those tests on a GPU machine with jax/flax installed, we get errors. See the comment.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24557/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24557",
"html_url": "https://github.com/huggingface/transformers/pull/24557",
"diff_url": "https://github.com/huggingface/transformers/pull/24557.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24557.patch",
"merged_at": 1687975862000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24556
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24556/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24556/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24556/events
|
https://github.com/huggingface/transformers/pull/24556
| 1,779,382,865 |
PR_kwDOCUB6oc5UKji6
| 24,556 |
Update PT/Flax weight conversion after #24030
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
Similar to #24547
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24556/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24556",
"html_url": "https://github.com/huggingface/transformers/pull/24556",
"diff_url": "https://github.com/huggingface/transformers/pull/24556.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24556.patch",
"merged_at": 1687974271000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24555
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24555/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24555/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24555/events
|
https://github.com/huggingface/transformers/pull/24555
| 1,779,313,457 |
PR_kwDOCUB6oc5UKUFj
| 24,555 |
[`InstructBlip`] Add instruct blip int8 test
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Addresses: https://github.com/huggingface/transformers/pull/24490#discussion_r1242098216
Also fixes an inconsistency with Blip / Blip2 processors so that users can call `.to()` with both device and dtype arguments to cast the inputs to half precision if necessary. Happy to move that to another PR if needed.
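As a rough illustration of that processor change (a sketch only; the checkpoint name and dummy image are assumptions on my side, and the casting behavior shown is the one described above):
```python
import torch
from PIL import Image
from transformers import Blip2Processor

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
image = Image.new("RGB", (224, 224))  # dummy image, just for the sketch

inputs = processor(images=image, text="a photo of", return_tensors="pt")
# With the fix, the returned batch can be cast in one call, e.g. to fp16;
# only floating point tensors (pixel_values) should be affected.
inputs = inputs.to(torch.float16)
print(inputs["pixel_values"].dtype)
```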
cc @sgugger @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24555/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24555",
"html_url": "https://github.com/huggingface/transformers/pull/24555",
"diff_url": "https://github.com/huggingface/transformers/pull/24555.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24555.patch",
"merged_at": 1687971991000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24554
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24554/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24554/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24554/events
|
https://github.com/huggingface/transformers/pull/24554
| 1,779,210,981 |
PR_kwDOCUB6oc5UJ9gW
| 24,554 |
Fix processor __init__ bug if image processor undefined
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
Fixes a bug which occurs if a processor is initialized with `image_processor=None`. An exception should be raised, saying an image processor should be defined, but at the moment it fails because the code references `feature_extractor`, which is not defined if it's not in the kwargs.
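A toy sketch of the intended behaviour (hypothetical code, not the actual library implementation):
```python
class ToyProcessor:
    """Hypothetical processor illustrating the intended guard, not the library code."""

    def __init__(self, image_processor=None, tokenizer=None, **kwargs):
        # Fall back to the deprecated `feature_extractor` kwarg only if it was passed...
        if image_processor is None:
            image_processor = kwargs.pop("feature_extractor", None)
        # ...and only then raise, instead of referencing an undefined variable.
        if image_processor is None:
            raise ValueError("You need to specify an `image_processor`.")
        self.image_processor = image_processor
        self.tokenizer = tokenizer


try:
    ToyProcessor(image_processor=None)
except ValueError as err:
    print(err)
```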
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24554/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24554",
"html_url": "https://github.com/huggingface/transformers/pull/24554",
"diff_url": "https://github.com/huggingface/transformers/pull/24554.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24554.patch",
"merged_at": 1687969047000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24553
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24553/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24553/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24553/events
|
https://github.com/huggingface/transformers/pull/24553
| 1,779,201,619 |
PR_kwDOCUB6oc5UJ7du
| 24,553 |
Update `EncodecIntegrationTest`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks @ydshieh - just to confirm, these updated values come from running the model on CUDA? (versus the original values, which were obtained on CPU)\r\n\r\nYes, it's on GPU. More specifically, on our CI runner's T4 GPU.",
"Perfect, thanks for updating the tests for CUDA! Also cc @ArthurZucker as a heads-up since we discussed this offline previously"
] | 1,687 | 1,688 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
Some tests have been failing since the addition of this model. This PR just updates the expected output values.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24553/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24553",
"html_url": "https://github.com/huggingface/transformers/pull/24553",
"diff_url": "https://github.com/huggingface/transformers/pull/24553.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24553.patch",
"merged_at": 1687968101000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24552
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24552/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24552/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24552/events
|
https://github.com/huggingface/transformers/pull/24552
| 1,779,135,095 |
PR_kwDOCUB6oc5UJs4g
| 24,552 |
Update old existing feature extractor references
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh Thanks for pointing out the extra places I missed. I've updated + other vision files needing the same update."
] | 1,687 | 1,688 | 1,688 |
COLLABORATOR
| null |
# What does this PR do?
Updates a bunch of old references to feature extractors for vision models.
Most of the code isn't public facing in e.g. docs, but is often copied when new models are added.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24552/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24552",
"html_url": "https://github.com/huggingface/transformers/pull/24552",
"diff_url": "https://github.com/huggingface/transformers/pull/24552.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24552.patch",
"merged_at": 1688030257000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24551
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24551/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24551/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24551/events
|
https://github.com/huggingface/transformers/issues/24551
| 1,779,061,904 |
I_kwDOCUB6oc5qClSQ
| 24,551 |
LoRA training adapter_model.bin is 888 bytes always
|
{
"login": "kallewoof",
"id": 250224,
"node_id": "MDQ6VXNlcjI1MDIyNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/250224?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kallewoof",
"html_url": "https://github.com/kallewoof",
"followers_url": "https://api.github.com/users/kallewoof/followers",
"following_url": "https://api.github.com/users/kallewoof/following{/other_user}",
"gists_url": "https://api.github.com/users/kallewoof/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kallewoof/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kallewoof/subscriptions",
"organizations_url": "https://api.github.com/users/kallewoof/orgs",
"repos_url": "https://api.github.com/users/kallewoof/repos",
"events_url": "https://api.github.com/users/kallewoof/events{/privacy}",
"received_events_url": "https://api.github.com/users/kallewoof/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada ",
"I backtracked until I was on the original tloen/alpaca-lora script, using the original dataset on int8 model, and I am still seeing the 888 byte adapter_model.bin file, along with a bunch of other files that you normally don't see in a LoRA output (optimizer.pt, etc).",
"I am closing this as I don't think I will be able to provide adequate feedback for a clean fix, and I've moved on to a different approach. Sorry for wasted time."
] | 1,687 | 1,688 | 1,688 |
NONE
| null |
### System Info
Ubuntu 22.04
Python 3.10.11
transformers 4.30.2
peft 0.4.0.dev0
accelerate 0.20.3
### Who can help?
@ArthurZucker
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code is at https://github.com/kallewoof/alpaca-lora/blob/202306-ooba-imports/finetune.py
You need a text file with text content (i.e. not instruction-based). Anything goes.
Run the above script: `python ./finetune.py --base_model=MODEL --raw_data_path=TEXTFILE --batch_size=32 --micro_batch_size=1 --num_epochs=2 --lora_r=128 --lora_alpha=256 --cutoff_len=512 --overlap_len=256 --save_steps=1`
### Expected behavior
Expected: After one (each) iteration, a checkpoint should be saved with an adapter_model.bin file that contains the LoRA weights.
Got: The checkpoints are made, but the adapter_model.bin file is only 888 bytes and does not grow (there are two versions, both wrong: one contains only the LoRA files, the other also contains optimizer.pt, rng_state.pth, scheduler.pt, etc., and I have no idea why those are saved for a LoRA).
Note: using the exact same conda environment, Oobabooga is able to generate LoRAs after loading a model in 4-bit with double quant, whereas the above finetune.py fails to do so. I have also verified that the resulting raw-data dataset is identical between the two code bases.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24551/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24550
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24550/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24550/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24550/events
|
https://github.com/huggingface/transformers/pull/24550
| 1,779,016,980 |
PR_kwDOCUB6oc5UJSsO
| 24,550 |
fix type annotations for arguments in training_args
|
{
"login": "shauray8",
"id": 39147312,
"node_id": "MDQ6VXNlcjM5MTQ3MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/39147312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shauray8",
"html_url": "https://github.com/shauray8",
"followers_url": "https://api.github.com/users/shauray8/followers",
"following_url": "https://api.github.com/users/shauray8/following{/other_user}",
"gists_url": "https://api.github.com/users/shauray8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shauray8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shauray8/subscriptions",
"organizations_url": "https://api.github.com/users/shauray8/orgs",
"repos_url": "https://api.github.com/users/shauray8/repos",
"events_url": "https://api.github.com/users/shauray8/events{/privacy}",
"received_events_url": "https://api.github.com/users/shauray8/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Fixing CI errors! ",
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger I don't particularly understand why this error occurs in examples_flax - ``` argparse.ArgumentError: argument --sharded_ddp: invalid typing.Union[str, bool, typing.List[transformers.trainer_utils.ShardedDDPOption], NoneType] value: '' ```",
"@sgugger `bool` breaks `--sharded_ddp`, I think we can still maintain Boolean arguments with string itself and \r\nhttps://github.com/huggingface/transformers/blob/20d6b84613984f2497587a62774704882ccbeee6/src/transformers/hf_argparser.py#L168-L173\r\nwith this `--sharded_ddp` and `--fsdp` defaults to string ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24550). All of your documentation changes will be reflected on that endpoint."
] | 1,687 | 1,689 | 1,689 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #24538, which is basically about fixing the type annotations for `fsdp`, `fsdp_config`, and `sharded_ddp` in training_args.py.
## Who can review?
maybe @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24550/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24550",
"html_url": "https://github.com/huggingface/transformers/pull/24550",
"diff_url": "https://github.com/huggingface/transformers/pull/24550.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24550.patch",
"merged_at": 1689862394000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24549
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24549/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24549/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24549/events
|
https://github.com/huggingface/transformers/pull/24549
| 1,779,010,226 |
PR_kwDOCUB6oc5UJRKx
| 24,549 |
Fix typing annotations for FSDP and DeepSpeed in TrainingArguments
|
{
"login": "mryab",
"id": 16766985,
"node_id": "MDQ6VXNlcjE2NzY2OTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/16766985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mryab",
"html_url": "https://github.com/mryab",
"followers_url": "https://api.github.com/users/mryab/followers",
"following_url": "https://api.github.com/users/mryab/following{/other_user}",
"gists_url": "https://api.github.com/users/mryab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mryab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mryab/subscriptions",
"organizations_url": "https://api.github.com/users/mryab/orgs",
"repos_url": "https://api.github.com/users/mryab/repos",
"events_url": "https://api.github.com/users/mryab/events{/privacy}",
"received_events_url": "https://api.github.com/users/mryab/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
" argument --deepspeed: invalid Dict value: './ds_config_zero3.json' i am face this after merge, i have change deepspeed to Optional[str] and it worked ",
"Ok, let's revert then as it's a purely cosmetic change."
] | 1,687 | 1,688 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
According to the docstrings and the code, the `fsdp_config` and `deepspeed` options of `TrainingArguments` accept dictionaries with configs for FSDP and DeepSpeed respectively. However, the typing annotations for both of them mention only `str` as a valid argument, which makes type checkers fail when a dictionary is passed for these options.
This PR fixes the problem by making these options accept `Dict` as well, and also fixes a couple of minor typos in their descriptions.
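A small sketch of the kind of call that should now type-check cleanly (the config keys below are purely illustrative, not a recommended setup; a `deepspeed` dict additionally requires the deepspeed package at runtime, so it is only shown in a comment):
```python
from transformers import TrainingArguments

# A dict is now a correctly typed value for `fsdp_config`; the same applies to
# `deepspeed`, e.g. TrainingArguments(..., deepspeed={"zero_optimization": {"stage": 2}}).
args = TrainingArguments(
    output_dir="out",
    fsdp="full_shard",
    fsdp_config={"fsdp_min_num_params": 1_000_000},  # illustrative key/value
)
print(args.fsdp_config)
```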
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24549/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24549",
"html_url": "https://github.com/huggingface/transformers/pull/24549",
"diff_url": "https://github.com/huggingface/transformers/pull/24549.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24549.patch",
"merged_at": 1687962977000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24548
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24548/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24548/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24548/events
|
https://github.com/huggingface/transformers/issues/24548
| 1,778,981,894 |
I_kwDOCUB6oc5qCRwG
| 24,548 |
VS Code Pylance does not highlight transformers imports
|
{
"login": "mmlynarik",
"id": 44208384,
"node_id": "MDQ6VXNlcjQ0MjA4Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/44208384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmlynarik",
"html_url": "https://github.com/mmlynarik",
"followers_url": "https://api.github.com/users/mmlynarik/followers",
"following_url": "https://api.github.com/users/mmlynarik/following{/other_user}",
"gists_url": "https://api.github.com/users/mmlynarik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmlynarik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmlynarik/subscriptions",
"organizations_url": "https://api.github.com/users/mmlynarik/orgs",
"repos_url": "https://api.github.com/users/mmlynarik/repos",
"events_url": "https://api.github.com/users/mmlynarik/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmlynarik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Sounds like an issue for VsCode or Pylance no?",
"This is probably the result of lazy module loading, or even the simple absence of a `__all__` (as far as I can see). This may not be something fixable within `transformers` but it does result from a design choice. https://github.com/huggingface/transformers/blob/main/src/transformers/__init__.py#L7139",
"The `__all__` is defined in the `_LazyModule` as you can see [here](https://github.com/huggingface/transformers/blob/6c57ce15587810968d64fb4b5700b63726397194/src/transformers/utils/import_utils.py#L1048).",
"> Sounds like an issue for VsCode or Pylance no?\r\n\r\nMight be, but itโs surprising that tokenizers and datasets which are two most closely related libriaries from the HF ecosystem are functioning correctly. ",
"Pretty sure this is fixed now, per the above issue on the Pylance repo.\r\n\r\nFix repros on my side; if anyone else would care to confirm, we can probably close this issue.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,687 | 1,693 | 1,693 |
NONE
| null |
### System Info
When Pylance is used as language server for VSCode, it does not highlight `transformers` imports even though library is correctly installed. Classes imports and library name itself are gray instead of yellow, see the enclosed figure. I'm not sure if it relates to Pylance itself, but it would be nice if this behavior was fixed.

### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Create and activate new virtual environment
2. Install transformers 4.30.2
3. Write into a new python module: `from transformers import BatchEncoding`
### Expected behavior
The import statement should be highlighted the same way as other libraries imports.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24548/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24547
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24547/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24547/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24547/events
|
https://github.com/huggingface/transformers/pull/24547
| 1,778,893,644 |
PR_kwDOCUB6oc5UI3eY
| 24,547 |
Update PT/TF weight conversion after #24030
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
Update PT/TF weight conversion due to the change in #24030.
(can do PT/Flax too in the same PR, but request a review first anyway)
### Code snippet to show issues and verify this PR's effect
(Fails for `main + nightly torch`. Passes for `PR + nightly torch` and `main/PR + stable torch`.)
```python
import transformers
from transformers import TFWav2Vec2Model
from tests.models.wav2vec2.test_modeling_tf_wav2vec2 import TFWav2Vec2ModelTest
self = TFWav2Vec2ModelTest()
self.setUp()
model_class = TFWav2Vec2Model
allow_missing_keys = False
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
pt_model_class_name = model_class.__name__[2:] # Skip the "TF" at the beginning
pt_model_class = getattr(transformers, pt_model_class_name)
tf_model = model_class(config)
pt_model = pt_model_class(config)
tf_inputs_dict = self._prepare_for_class(inputs_dict, model_class)
# Check we can load pt model in tf and vice-versa with model => model functions
try:
_tf_model = transformers.load_pytorch_model_in_tf2_model(
tf_model, pt_model, tf_inputs=tf_inputs_dict, allow_missing_keys=allow_missing_keys
)
except:
_tf_model = None
try:
_pt_model = transformers.load_tf2_model_in_pytorch_model(
pt_model, tf_model, allow_missing_keys=allow_missing_keys
)
except:
_pt_model = None
if _tf_model is None:
print("_tf_model fails")
else:
print("_tf_model OK")
if _pt_model is None:
print("_pt_model fails")
else:
print("_pt_model OK")
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24547/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24547",
"html_url": "https://github.com/huggingface/transformers/pull/24547",
"diff_url": "https://github.com/huggingface/transformers/pull/24547.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24547.patch",
"merged_at": 1687963017000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24546
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24546/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24546/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24546/events
|
https://github.com/huggingface/transformers/issues/24546
| 1,778,822,285 |
I_kwDOCUB6oc5qBqyN
| 24,546 |
DataCollatorForLanguageModeling call of tokenizer.pad causes crash
|
{
"login": "condor-cp",
"id": 40066676,
"node_id": "MDQ6VXNlcjQwMDY2Njc2",
"avatar_url": "https://avatars.githubusercontent.com/u/40066676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/condor-cp",
"html_url": "https://github.com/condor-cp",
"followers_url": "https://api.github.com/users/condor-cp/followers",
"following_url": "https://api.github.com/users/condor-cp/following{/other_user}",
"gists_url": "https://api.github.com/users/condor-cp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/condor-cp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/condor-cp/subscriptions",
"organizations_url": "https://api.github.com/users/condor-cp/orgs",
"repos_url": "https://api.github.com/users/condor-cp/repos",
"events_url": "https://api.github.com/users/condor-cp/events{/privacy}",
"received_events_url": "https://api.github.com/users/condor-cp/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This data collator is not meant to be used with labels in your data. It builts the labels from the input IDs (either by masking random tokens or copying the inputs).",
"I understand thank you. Then this line in the documentation is misleading, since it prepares a 'labels' feature (which works in the example because all samples are the same size) : \r\nhttps://github.com/huggingface/transformers/blob/daccde143d646e4fec8d52cc870b8c7fd1d2581c/docs/source/en/tasks/masked_language_modeling.md?plain=1#L171\r\n\r\nMaybe in the case a 'labels' feature is provided, the error could be caught earlier than trying to convert to pytorch tensor ?",
"Yes this line should probably be removed. I think it's a copy-paste from a case in our examples where we use the standard data collator after.",
"Ok thank you I will pull-request this line removal.",
"@sgugger what if I do want to use labels and this data collator? specifically in my case i create the labels myself and put -100 in all \"user\" messages of a conversation. is there some parameter I can use? or do I have to create my own data collator?"
] | 1,687 | 1,707 | 1,687 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.19.0-45-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling
tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
multiple_length_batch = [
{
"input_ids": [0, 51, 51, 2],
"labels": [0, 51, 51, 2],
},
{
"input_ids": [0, 10, 11, 12, 13, 2],
"labels": [0, 10, 11, 12, 13, 2],
},
]
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer,
mlm_probability=0.15,
)
data_collator.torch_call(multiple_length_batch)
```
Causes
```sh
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`labels` in this case) have excessive nesting (inputs type `list` where type `int` is expected).
```
### Expected behavior
I simplified the problem analysis; the minimal reproduction above was found by calling a Trainer that uses this same data collator (the failing function gets called here: https://github.com/huggingface/transformers/blob/e84bf1f734f87aa2bedc41b9b9933d00fc6add98/src/transformers/data/data_collator.py#L45).
If input_ids and labels are not padded manually beforehand, DataCollatorForLanguageModeling crashes here:
https://github.com/huggingface/transformers/blob/e84bf1f734f87aa2bedc41b9b9933d00fc6add98/src/transformers/data/data_collator.py#L732
, before "labels" are padded a few lines later here:
https://github.com/huggingface/transformers/blob/e84bf1f734f87aa2bedc41b9b9933d00fc6add98/src/transformers/data/data_collator.py#L748
I suspect the conversion to a PyTorch tensor should be done after the labels are also padded by that line, not before.
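Based on the maintainer reply (the collator is meant to build `labels` itself), here is a minimal sketch of the usage that avoids the crash:
```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

# Drop the pre-built "labels" feature and let the collator pad the inputs and
# create the (masked) labels on its own.
batch = [
    {"input_ids": [0, 51, 51, 2]},
    {"input_ids": [0, 10, 11, 12, 13, 2]},
]
out = data_collator.torch_call(batch)
print(out["input_ids"].shape, out["labels"].shape)  # both padded to the same length
```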
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24546/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24545
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24545/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24545/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24545/events
|
https://github.com/huggingface/transformers/issues/24545
| 1,778,712,381 |
I_kwDOCUB6oc5qBP89
| 24,545 |
open_llama tokenization modules import
|
{
"login": "npapasarantopoulos",
"id": 115629120,
"node_id": "U_kgDOBuRcQA",
"avatar_url": "https://avatars.githubusercontent.com/u/115629120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/npapasarantopoulos",
"html_url": "https://github.com/npapasarantopoulos",
"followers_url": "https://api.github.com/users/npapasarantopoulos/followers",
"following_url": "https://api.github.com/users/npapasarantopoulos/following{/other_user}",
"gists_url": "https://api.github.com/users/npapasarantopoulos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/npapasarantopoulos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npapasarantopoulos/subscriptions",
"organizations_url": "https://api.github.com/users/npapasarantopoulos/orgs",
"repos_url": "https://api.github.com/users/npapasarantopoulos/repos",
"events_url": "https://api.github.com/users/npapasarantopoulos/events{/privacy}",
"received_events_url": "https://api.github.com/users/npapasarantopoulos/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"There is not a single line in the library trying that import, so it looks like an issue on `freezegun` more than on Transformers.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Just hit this issue too with `freezegun`\r\n\r\n<img width=\"1899\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/4619775/a2149e8a-19ac-493f-8284-c26109621b88\">\r\n",
"While this might be an issue with how freezegun loads dependencies, `freezegun` did not come up with those names: `transformers.models.open_llama.tokenization_open_llama`, `transformers.models.open_llama.tokenization_open_llama_fast`. They are referenced [here](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/deprecated/open_llama/__init__.py#L35).\r\n\r\nThe temporary workaround I used is to create stubs for them using [surrogate](https://github.com/ikostia/surrogate) like this:\r\n```\r\ndef pytest_sessionstart(session):\r\n @surrogate('transformers.models.open_llama.tokenization_open_llama')\r\n @surrogate('transformers.models.open_llama.tokenization_open_llama_fast')\r\n def freezegun_initial_import():\r\n pass\r\n freezegun_initial_import()\r\n```",
"@npapasarantopoulos The link you show gives the name `tokenization_open_llama` defined in `transformers.models.deprecated.open_llama`, so it does seem like `freezegun` is making those names up.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"this still seems like a real issue and affects my codebase as well. not sure where/how this should be resolved. FWIW the linked [file](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/deprecated/open_llama/__init__.py#L35) above looks pretty wack and requires careful inspection to see whether it is violating import machinery in a way that would cause errors elsewhere\r\n\r\nI tried the workaround of https://github.com/huggingface/transformers/issues/24545#issuecomment-1663496369 but it did not fix things for me",
"in the root `conftest.py`\r\n```\r\nfrom surrogate import surrogate\r\n\r\ndef pytest_sessionstart(session):\r\n surrogate(\"transformers.models.deprecated.open_llama.tokenization_open_llama\").prepare()\r\n```\r\n",
"Another option is to use fixture but the fixture will be called for each worker while pytest_sessionstart is called only once. Pros of the fixture are that `__enter__` and `__exit__` will be called automatically which means base modules will be restored. Use\r\nfixture in case you need base module restoration for some reason.\r\n\r\n```\r\[email protected](scope=\"session\", autouse=True)\r\ndef stub_freezegun_dynamic_imports():\r\n with surrogate(\"transformers.models.deprecated.open_llama.tokenization_open_llama\"):\r\n yield\r\n```",
"Thank you so much:\r\n> ```\r\n> @pytest.fixture(scope=\"session\", autouse=True)\r\n> def stub_freezegun_dynamic_imports():\r\n> with surrogate(\"transformers.models.deprecated.open_llama.tokenization_open_llama\"):\r\n> yield\r\n> ```\r\n\r\n^^ Just saved my code base from a ton of ugly surrogates.\r\n\r\nMy final stub looks like in my root level `conftest.py`:\r\n```\r\n@fixture(scope=\"session\", autouse=True)\r\ndef stub_freezegun_dynamic_imports():\r\n with surrogate(\"transformers.models.deprecated.open_llama.tokenization_open_llama\"):\r\n with surrogate('transformers.models.deprecated.open_llama.tokenization_open_llama_fast'):\r\n with surrogate('transformers.models.open_llama.tokenization_open_llama'):\r\n with surrogate('transformers.models.open_llama.tokenization_open_llama_fast'):\r\n yield\r\n```\r\nWhich still of course beats doing this on every function I needed to do it for",
"Hey, if anyone else experiencing this issue while running unit test on your module you can also configure your freezegun to ignore some packages. \r\n[Documentation here](https://github.com/spulec/freezegun/blob/master/README.rst#ignore-packages)\r\n\r\nIn my use case \r\n`freezegun.config.configure(extend_ignore_list=[\"transformers\"])`\r\nfixed my tests\r\n"
] | 1,687 | 1,701 | 1,693 |
NONE
| null |
### System Info
I stumbled upon this issue, which is not an issue with the OpenLLaMa implementation; I did not try to use OpenLLaMA.
I am importing `transformers.pipeline` in a project including tests using [freezegun](https://github.com/spulec/freezegun) to freeze dates. It seems like freezegun recursively checks all imports of all imported modules, and I am getting the following error:
`ModuleNotFoundError: No module named 'transformers.models.open_llama.tokenization_open_llama'`
and the same for `transformers.models.open_llama.tokenization_open_llama_fast`.
This is probably just an import error, since it seems that open_llama uses `LlamaTokenizer` and `LlamaTokenizerFast`; creating stubs for `transformers.models.open_llama.tokenization_open_llama` and `transformers.models.open_llama.tokenization_open_llama_fast` seems to solve the import issue and tests run just fine.
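For illustration, one way to create such stubs without extra dependencies (a sketch; the module names are simply the ones from the error message, and this just registers empty stand-in modules before freezegun walks the imports):
```python
import sys
import types

for name in (
    "transformers.models.open_llama.tokenization_open_llama",
    "transformers.models.open_llama.tokenization_open_llama_fast",
):
    # Register an empty stand-in module so the dynamic import no longer fails.
    sys.modules.setdefault(name, types.ModuleType(name))
```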
### Who can help?
@s-JoL
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Minimal code to reproduce:
```
from freezegun import freeze_time
@freeze_time('2022-01-01 12:01:00')
def test_a():
from transformers import pipeline
```
### Expected behavior
no import errors
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24545/reactions",
"total_count": 6,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24545/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24544
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24544/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24544/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24544/events
|
https://github.com/huggingface/transformers/issues/24544
| 1,778,709,403 |
I_kwDOCUB6oc5qBPOb
| 24,544 |
Issue while training Donut model for parsing with custom decoder and tokenizer
|
{
"login": "akashlp27",
"id": 52736048,
"node_id": "MDQ6VXNlcjUyNzM2MDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/52736048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akashlp27",
"html_url": "https://github.com/akashlp27",
"followers_url": "https://api.github.com/users/akashlp27/followers",
"following_url": "https://api.github.com/users/akashlp27/following{/other_user}",
"gists_url": "https://api.github.com/users/akashlp27/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akashlp27/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akashlp27/subscriptions",
"organizations_url": "https://api.github.com/users/akashlp27/orgs",
"repos_url": "https://api.github.com/users/akashlp27/repos",
"events_url": "https://api.github.com/users/akashlp27/events{/privacy}",
"received_events_url": "https://api.github.com/users/akashlp27/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @akashlp27 \r\n\r\nAs per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) ๐ค\r\n\r\nHowever, you might want to take a look\r\nhttps://github.com/huggingface/transformers/issues/18190#issuecomment-1273482690\r\nhttps://github.com/huggingface/transformers/issues/18190#issuecomment-1216584872\r\n(Not sure though)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,687 | 1,691 | 1,691 |
NONE
| null |
Hey all, I was trying to train a Donut model for parsing documents that contain Arabic-only information. In order to achieve this I collected an `Arabic corpus` from various sources and then trained:
1. an `Mbart Tokenizer` on the Arabic corpus,
2. an `Mbart decoder` with the same dataset.
Initially the model was training well, meaning the loss was decreasing gradually, but during validation all of my dataset tokens are predicted as `<UNK>` tokens. Because of this the `Normed ED` value is above `0.9`, even though the loss keeps decreasing.
Is there anything I am missing? Any input will help a lot. @gwkrsrch, @Vadkoz, @NielsRogge
Thanks and regards.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24544/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24543
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24543/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24543/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24543/events
|
https://github.com/huggingface/transformers/pull/24543
| 1,778,650,062 |
PR_kwDOCUB6oc5UIC7a
| 24,543 |
[`gpt2-int8`] Add gpt2-xl int8 test
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Can confirm all the tests pass now (with the accelerate PR mentioned above)"
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Addresses: https://github.com/huggingface/transformers/pull/24504#discussion_r1243813429
Currently the test is failing for the following reason (which I still need to explore, as discussed offline with @sgugger):
1- a buffer is defined with `persistent=False` in the modeling file
2- and is therefore not present in the state_dict
In that case `dispatch_model` seems to fail, as I face a device mismatch issue. I think a potential fix needs to land in accelerate.
I face the same issue for the blip2 int8 tests, as `cos_cached` and `sin_cached` are defined as buffers with `persistent=False` and are not present in the state dict.
It seems that the issue does not happen before 5791d949ff93733c102461ba89c8310745a3fa79 on accelerate.
cc @sgugger
EDIT: https://github.com/huggingface/accelerate/pull/1660 should fix the issue I am facing
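For context, a toy sketch of the condition described in points 1-2 above (class and buffer names are illustrative, mirroring the `cos_cached` buffer mentioned here):
```python
import torch
from torch import nn


class RotaryCache(nn.Module):
    def __init__(self, seq_len: int = 16, dim: int = 8):
        super().__init__()
        angles = torch.arange(seq_len).float()[:, None] / dim
        # persistent=False keeps the buffer out of the state_dict, so it is
        # never part of the checkpoint that gets dispatched.
        self.register_buffer("cos_cached", torch.cos(angles), persistent=False)


module = RotaryCache()
print("cos_cached" in module.state_dict())  # False
```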
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24543/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24543",
"html_url": "https://github.com/huggingface/transformers/pull/24543",
"diff_url": "https://github.com/huggingface/transformers/pull/24543.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24543.patch",
"merged_at": 1687968134000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24542
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24542/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24542/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24542/events
|
https://github.com/huggingface/transformers/issues/24542
| 1,778,388,573 |
I_kwDOCUB6oc5qAA5d
| 24,542 |
Memory leak after repeated inference
|
{
"login": "huu3301",
"id": 44219645,
"node_id": "MDQ6VXNlcjQ0MjE5NjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/44219645?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/huu3301",
"html_url": "https://github.com/huu3301",
"followers_url": "https://api.github.com/users/huu3301/followers",
"following_url": "https://api.github.com/users/huu3301/following{/other_user}",
"gists_url": "https://api.github.com/users/huu3301/gists{/gist_id}",
"starred_url": "https://api.github.com/users/huu3301/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/huu3301/subscriptions",
"organizations_url": "https://api.github.com/users/huu3301/orgs",
"repos_url": "https://api.github.com/users/huu3301/repos",
"events_url": "https://api.github.com/users/huu3301/events{/privacy}",
"received_events_url": "https://api.github.com/users/huu3301/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @huu3301 \r\n\r\nPlease format the code snippet properly (with proper indent too). See for example\r\n\r\n<img width=\"332\" alt=\"Screenshot 2023-06-27 111112\" src=\"https://github.com/huggingface/transformers/assets/2521628/ec2852fb-695a-456b-b09f-8f99ef0bdd30\">\r\n\r\nRegarding your question about leak:\r\n\r\nDo you only get iteration `62` and `6195` with `b != a`? In this case, there is no memory leak: it's just python process gets to use a bit more memory to perform something under the hood.\r\n\r\n\r\n",
"@ydshieh \r\nThe indent disappeared when I submitted the issue. \r\nI increased the number of iteration to 1000000 , and the memory increment was less than 50kb. There was no memory leak. Thank you for answering."
] | 1,687 | 1,687 | 1,687 |
NONE
| null |
### System Info
- `transformers` version: 4.30.2
- Platform: Linux-3.10.0-1160.90.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I run the following script, the memory rises slowly and a memory leak happens. I changed the transformers version and the torch version, but it didn't help. When using TFBertModel instead of BertModel, or when moving the model to the CPU, the memory leak still exists.
##########
from transformers import BertTokenizer, BertModel
import torch
import os
import psutil

device = "cuda:0"
model_path = "bert-base-uncased"
model = BertModel.from_pretrained(model_path)
tokenizer = BertTokenizer.from_pretrained(model_path)
model.to(device)
model.eval()
query = "Replace me by any text you'd like."

for i in range(10000):
    with torch.no_grad():
        encoded_input = tokenizer(query, return_tensors='pt').to(device)
        a = psutil.Process(os.getpid()).memory_info().rss / 1024  # memory (kB) before the forward pass
        output = model(**encoded_input)
        b = psutil.Process(os.getpid()).memory_info().rss / 1024  # memory (kB) after the forward pass
        if b != a and i > 0:
            print(i)
            print("b-a=%s kb" % (b - a))
##########
result:
62
b-a=80.0 kb
6195
b-a=8.0 kb
### Expected behavior
Memory is almost stable.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24542/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24541
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24541/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24541/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24541/events
|
https://github.com/huggingface/transformers/pull/24541
| 1,778,374,028 |
PR_kwDOCUB6oc5UHHac
| 24,541 |
Unpin DeepSpeed and require DS >= 0.9.3
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
From @pacman100
> (for accelerate) the minimum DeepSpeed version now is 0.9.3
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24541/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24541",
"html_url": "https://github.com/huggingface/transformers/pull/24541",
"diff_url": "https://github.com/huggingface/transformers/pull/24541.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24541.patch",
"merged_at": 1687953682000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24540
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24540/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24540/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24540/events
|
https://github.com/huggingface/transformers/issues/24540
| 1,778,270,143 |
I_kwDOCUB6oc5p_j-_
| 24,540 |
Issue Loading 4-bit and 8-bit language models: ValueError: `.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
|
{
"login": "DJT777",
"id": 47899472,
"node_id": "MDQ6VXNlcjQ3ODk5NDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/47899472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DJT777",
"html_url": "https://github.com/DJT777",
"followers_url": "https://api.github.com/users/DJT777/followers",
"following_url": "https://api.github.com/users/DJT777/following{/other_user}",
"gists_url": "https://api.github.com/users/DJT777/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DJT777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DJT777/subscriptions",
"organizations_url": "https://api.github.com/users/DJT777/orgs",
"repos_url": "https://api.github.com/users/DJT777/repos",
"events_url": "https://api.github.com/users/DJT777/events{/privacy}",
"received_events_url": "https://api.github.com/users/DJT777/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @DJT777 \r\nThanks for the report\r\nAre you using the main branch of accelerate + single GPU? If that's the case https://github.com/huggingface/accelerate/pull/1652 should solve the issue. Will try to reproduce later without that fix",
"I wasn't able to test it using that commit. However running everything with the versioning from my June 8th run got the model loaded back up again. I am using this to run the notebook:\r\n\r\n!pip install git+https://www.github.com/huggingface/transformers@2e2088f24b60d8817c74c32a0ac6bb1c5d39544d\r\n!pip install huggingface-hub==0.15.1\r\n!pip install tokenizers==0.13.3\r\n!pip install safetensors==0.3.1\r\n!pip install git+https://github.com/huggingface/accelerate@040f178569fbfe7ab7113af709dc5a7fa09e95bd\r\n!pip install bitsandbytes==0.39.0\r\n!pip install einops==0.6.1\r\n",
"Thanks @DJT777 \r\nCan you try with `pip install git+https://github.com/huggingface/accelerate.git@fix-to-int8` ? ",
"@younesbelkada\r\n\r\nI'll have an attempt at running things again with that.",
"Great thanks! ",
"I went for\r\n\r\n!pip install git+https://github.com/huggingface/transformers.git@6ce6d62b6f20040129ec9831e7c4f6576402ea42\r\n!pip install git+https://github.com/huggingface/accelerate.git@5791d949ff93733c102461ba89c8310745a3fa79\r\n!pip install git+https://github.com/huggingface/peft.git@e2b8e3260d3eeb736edf21a2424e89fe3ecf429d\r\n!pip install transformers[deepspeed]\r\nI had to include transformers[deepspeed] yesterday, and earlier today I had to cherrypick commits to make things work..\r\n\r\nDevelopment is going so fast, hard to keep up with every change ๐
",
"Hi @DJT777 \r\nI just ran the script below:\r\n```python\r\nfrom transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer\r\nimport torch\r\n\r\nmodel_path=\"tiiuae/falcon-40b-instruct\"\r\n\r\nconfig = AutoConfig.from_pretrained(model_path, trust_remote_code=True)\r\nmodel = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, load_in_4bit=True, device_map=\"auto\")\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"tiiuae/falcon-40b-instruct\")\r\n\r\ninput_text = \"Describe the solar system.\"\r\ninput_ids = tokenizer(input_text, return_tensors=\"pt\").input_ids.to(\"cuda\")\r\n\r\noutputs = model.generate(input_ids, max_length=10)\r\nprint(tokenizer.decode(outputs[0]))\r\n```\r\nand transformers' main branch & the `fix-to-int8` branch of accelerate and I can confirm the script worked fine. I am running on 2x NVIDIA T4 16GB",
"@younesbelkada \r\n\r\nI'm not able to confirm if it is working in Colab.",
"I get the same error in Google Colab (\"ValueError: `.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.\"), things were working perfectly well yesterday... Copy-pasting this code in a Colab notebook cell and running it might allow for the reproduction of that error:\r\n```python\r\n!pip install -q -U bitsandbytes\r\n!pip install -q -U git+https://github.com/huggingface/transformers.git\r\n!pip install -q -U git+https://github.com/huggingface/peft.git\r\n!pip install -q -U git+https://github.com/huggingface/accelerate.git\r\n!pip install -q datasets\r\n!pip install -q einops\r\n\r\nimport torch\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig\r\n\r\nmodel_id = \"ybelkada/falcon-7b-sharded-bf16\"\r\nbnb_config = BitsAndBytesConfig(\r\n load_in_4bit=True,\r\n bnb_4bit_use_double_quant=True,\r\n bnb_4bit_quant_type=\"nf4\",\r\n bnb_4bit_compute_dtype=torch.bfloat16\r\n)\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_id)\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, trust_remote_code=True, device_map={\"\":0})\r\n```\r\nNotebook settings/runtime type are/is:\r\n- Runtime type = Python 3\r\n- GPU = T4",
"Hi @Maaalik \r\nI can confirm the PR mentioned above on accelerate fixes your issue on GColab, can you try on a new runtime / fresh environment: \r\n\r\n```python\r\n!pip install -q -U bitsandbytes\r\n!pip install -q -U git+https://github.com/huggingface/transformers.git\r\n!pip install -q -U git+https://github.com/huggingface/peft.git\r\n!pip install -q -U git+https://github.com/huggingface/accelerate.git@fix-to-int8\r\n!pip install -q datasets\r\n!pip install -q einops\r\n\r\nimport torch\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig\r\n\r\nmodel_id = \"ybelkada/falcon-7b-sharded-bf16\"\r\nbnb_config = BitsAndBytesConfig(\r\n load_in_4bit=True,\r\n bnb_4bit_use_double_quant=True,\r\n bnb_4bit_quant_type=\"nf4\",\r\n bnb_4bit_compute_dtype=torch.bfloat16\r\n)\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_id)\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, trust_remote_code=True, device_map={\"\":0})\r\n```\r\nI just tested it on GColab",
"Works like a charm! Thank you very much, @younesbelkada!",
"https://github.com/huggingface/accelerate/pull/1652 being merged you can now install `accelerate` from source and it should work",
"@younesbelkada \r\nAll the test case above is using device_map=\"auto\", it also works for me.\r\nBUT:\r\nif I use device_map={'':torch.cuda.current_device()}, the error shows again like:\r\n```\r\nTraceback (most recent call last):\r\n File \"train1.py\", line 124, in <module>\r\n trainer = SFTTrainer(\r\n File \"/usr/local/lib/python3.8/dist-packages/trl/trainer/sft_trainer.py\", line 212, in __init__\r\n super().__init__(\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/trainer.py\", line 499, in __init__\r\n self._move_model_to_device(model, args.device)\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/trainer.py\", line 741, in _move_model_to_device\r\n model = model.to(device)\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py\", line 1886, in to\r\n raise ValueError(\r\nValueError: `.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.\r\n```\r\n",
"@younesbelkada \r\nEven if set device_map=\"auto\", if only have 1 GPU, still facing the error:\r\n```\r\nTraceback (most recent call last):\r\n File \"train1.py\", line 124, in <module>\r\n trainer = SFTTrainer(\r\n File \"/usr/local/lib/python3.8/dist-packages/trl/trainer/sft_trainer.py\", line 212, in __init__\r\n super().__init__(\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/trainer.py\", line 499, in __init__\r\n self._move_model_to_device(model, args.device)\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/trainer.py\", line 741, in _move_model_to_device\r\n model = model.to(device)\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py\", line 1886, in to\r\n raise ValueError(\r\nValueError: `.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`\r\n```",
"@sgugger Sorry another question here =) as above",
"I do not have the answer, no need to tag me.",
"hi @Andcircle \r\nDo you face the same issue with the `main` branch of transformers?\r\n\r\n```\r\npip install -U git+https://github.com/huggingface/transformers.git\r\n```",
"> hi @Andcircle Do you face the same issue with the `main` branch of transformers?\r\n> \r\n> ```\r\n> pip install -U git+https://github.com/huggingface/transformers.git\r\n> ```\r\n\r\nHi @younesbelkada,\r\n\r\nOnce I changed to 4.32.0.dev0, the error \"ValueError: `.to` is not supported for `4-bit` or `8-bit` models.\" is gone. But got new error:\r\n\r\n```ValueError: weight is on the meta device, we need a `value` to put in on 0.\r\nERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 14627) of binary: /usr/bin/python3```\r\n\r\nI load the llama2 7b model like this, then wanna use SFT trainer\r\n```\r\nbnb_config = BitsAndBytesConfig(\r\n load_in_4bit=True,\r\n # load_in_8bit=True,\r\n bnb_4bit_quant_type=\"nf4\",\r\n bnb_4bit_compute_dtype=compute_dtype,\r\n bnb_4bit_use_double_quant=True,\r\n )\r\n\r\n model = AutoModelForCausalLM.from_pretrained(\r\n model_name, quantization_config=bnb_config, trust_remote_code=True, \r\n low_cpu_mem_usage=False,\r\n # device_map={'':torch.cuda.current_device()}\r\n )\r\n```\r\n\r\n@younesbelkada \r\nIf I switch pip install -U git+https://github.com/huggingface/transformers@de9255de27abfcae4a1f816b904915f0b1e23cd9, there's no \"weight is on the meta device\" issue, but it has \"ValueError: `.to` is not supported for `4-bit` or `8-bit` models\" issue for full fine tuning without lora.",
"> > Thanks @DJT777\r\n> > Can you try with `pip install git+https://github.com/huggingface/accelerate.git@fix-to-int8` ?\r\n> \r\n> Using https://github.com/huggingface/accelerate@d1628ee, didn't solve.\r\n\r\nWARNING: Did not find branch or tag 'fix-to-int8', assuming revision or ref.\r\n Running command git checkout -q fix-to-int8\r\n error: pathspec 'fix-to-int8' did not match any file(s) known to git\r\n error: subprocess-exited-with-error\r\n\r\n ร git checkout -q fix-to-int8 did not run successfully.\r\n โ exit code: 1\r\n โฐโ> See above for output.\r\n\r\n note: This error originates from a subprocess, and is likely not a problem with pip.\r\nerror: subprocess-exited-with-error\r\n\r\nร git checkout -q fix-to-int8 did not run successfully.\r\nโ exit code: 1\r\nโฐโ> See above for output.\r\n\r\nnote: This error originates from a subprocess, and is likely not a problem with pip.",
"hi @MrKsiJ \r\nYou can now use with accelerate main branch\r\n```bash\r\npip install -U git+https://github.com/huggingface/accelerate.git\r\n```",
"> hi @MrKsiJ You can now use with accelerate main branch\r\n> \r\n> ```shell\r\n> pip install -U git+https://github.com/huggingface/accelerate.git\r\n> ```\r\n\r\nthe problem is solved, we are moving to another place, but now I have another question how to run peftmodel.from_trained locally without the Internet, if you disable the Internet, then peftmodel.from_trained for some reason still breaks on humbleface, although everything is downloaded at the first launch"
] | 1,687 | 1,698 | 1,687 |
NONE
| null |
### System Info
I'm running into an issue where I'm not able to load a 4-bit or 8-bit quantized version of Falcon or LLaMA models. This was working a couple of weeks ago, around June 8th. This is running on Colab. I'm wondering if anyone knows of a fix, or why this is no longer working.
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.11 (gpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
### Who can help?
@ArthurZucker @younesbelkada @sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Running on an A100 in Colab Pro
```
!pip install git+https://www.github.com/huggingface/transformers
!pip install git+https://github.com/huggingface/accelerate
!pip install bitsandbytes
!pip install einops
from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer
import torch
model_path="tiiuae/falcon-40b-instruct"
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, load_in_4bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b-instruct")
input_text = "Describe the solar system."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids, max_length=100)
print(tokenizer.decode(outputs[0]))
```
Cell output:
```
Collecting git+https://www.github.com/huggingface/transformers
Cloning https://www.github.com/huggingface/transformers to /tmp/pip-req-build-6pyatvel
Running command git clone --filter=blob:none --quiet https://www.github.com/huggingface/transformers /tmp/pip-req-build-6pyatvel
warning: redirecting to https://github.com/huggingface/transformers.git/
Resolved https://www.github.com/huggingface/transformers to commit e84bf1f734f87aa2bedc41b9b9933d00fc6add98
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (3.12.2)
Collecting huggingface-hub<1.0,>=0.14.1 (from transformers==4.31.0.dev0)
Downloading huggingface_hub-0.15.1-py3-none-any.whl (236 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 236.8/236.8 kB 11.6 MB/s eta 0:00:00
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (1.22.4)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (23.1)
Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (6.0)
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (2022.10.31)
Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (2.27.1)
Collecting tokenizers!=0.11.3,<0.14,>=0.11.1 (from transformers==4.31.0.dev0)
Downloading tokenizers-0.13.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.8 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 7.8/7.8 MB 114.2 MB/s eta 0:00:00
Collecting safetensors>=0.3.1 (from transformers==4.31.0.dev0)
Downloading safetensors-0.3.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 1.3/1.3 MB 79.9 MB/s eta 0:00:00
Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (4.65.0)
Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.14.1->transformers==4.31.0.dev0) (2023.6.0)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.14.1->transformers==4.31.0.dev0) (4.6.3)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.31.0.dev0) (1.26.16)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.31.0.dev0) (2023.5.7)
Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.31.0.dev0) (2.0.12)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.31.0.dev0) (3.4)
Building wheels for collected packages: transformers
Building wheel for transformers (pyproject.toml) ... done
Created wheel for transformers: filename=transformers-4.31.0.dev0-py3-none-any.whl size=7228417 sha256=5867afa880111a40f7b630e51d9f1709ec1131236a31c2c7fb5f97179e3d1405
Stored in directory: /tmp/pip-ephem-wheel-cache-t06u3u6x/wheels/c1/ac/11/e69d454307e735e14f4f95e575c8be27fd99835ec36f504c13
Successfully built transformers
Installing collected packages: tokenizers, safetensors, huggingface-hub, transformers
Successfully installed huggingface-hub-0.15.1 safetensors-0.3.1 tokenizers-0.13.3 transformers-4.31.0.dev0
Collecting git+https://github.com/huggingface/accelerate
Cloning https://github.com/huggingface/accelerate to /tmp/pip-req-build-76ziff6x
Running command git clone --filter=blob:none --quiet https://github.com/huggingface/accelerate /tmp/pip-req-build-76ziff6x
Resolved https://github.com/huggingface/accelerate to commit d141b4ce794227450a105b7281611c7980e5b3d6
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (1.22.4)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (23.1)
Requirement already satisfied: psutil in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (5.9.5)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (6.0)
Requirement already satisfied: torch>=1.6.0 in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (2.0.1+cu118)
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (3.12.2)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (4.6.3)
Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (1.11.1)
Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (3.1)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (3.1.2)
Requirement already satisfied: triton==2.0.0 in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (2.0.0)
Requirement already satisfied: cmake in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch>=1.6.0->accelerate==0.21.0.dev0) (3.25.2)
Requirement already satisfied: lit in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch>=1.6.0->accelerate==0.21.0.dev0) (16.0.6)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch>=1.6.0->accelerate==0.21.0.dev0) (2.1.3)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch>=1.6.0->accelerate==0.21.0.dev0) (1.3.0)
Building wheels for collected packages: accelerate
Building wheel for accelerate (pyproject.toml) ... done
Created wheel for accelerate: filename=accelerate-0.21.0.dev0-py3-none-any.whl size=234648 sha256=71b98a6d4b1111cc9ca22265f6699cd552325e5f71c83daebe696afd957497ee
Stored in directory: /tmp/pip-ephem-wheel-cache-atmtszgr/wheels/f6/c7/9d/1b8a5ca8353d9307733bc719107acb67acdc95063bba749f26
Successfully built accelerate
Installing collected packages: accelerate
Successfully installed accelerate-0.21.0.dev0
Collecting bitsandbytes
Downloading bitsandbytes-0.39.1-py3-none-any.whl (97.1 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 97.1/97.1 MB 18.8 MB/s eta 0:00:00
Installing collected packages: bitsandbytes
Successfully installed bitsandbytes-0.39.1
Collecting einops
Downloading einops-0.6.1-py3-none-any.whl (42 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 42.2/42.2 kB 3.8 MB/s eta 0:00:00
Installing collected packages: einops
Successfully installed einops-0.6.1
Downloading (โฆ)lve/main/config.json: 100%
658/658 [00:00<00:00, 51.8kB/s]
Downloading (โฆ)/configuration_RW.py: 100%
2.51k/2.51k [00:00<00:00, 227kB/s]
A new version of the following files was downloaded from https://huggingface.co/tiiuae/falcon-40b-instruct:
- configuration_RW.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
Downloading (โฆ)main/modelling_RW.py: 100%
47.1k/47.1k [00:00<00:00, 3.76MB/s]
A new version of the following files was downloaded from https://huggingface.co/tiiuae/falcon-40b-instruct:
- modelling_RW.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
Downloading (โฆ)model.bin.index.json: 100%
39.3k/39.3k [00:00<00:00, 3.46MB/s]
Downloading shards: 100%
9/9 [04:40<00:00, 29.33s/it]
Downloading (โฆ)l-00001-of-00009.bin: 100%
9.50G/9.50G [00:37<00:00, 274MB/s]
Downloading (โฆ)l-00002-of-00009.bin: 100%
9.51G/9.51G [00:33<00:00, 340MB/s]
Downloading (โฆ)l-00003-of-00009.bin: 100%
9.51G/9.51G [00:28<00:00, 320MB/s]
Downloading (โฆ)l-00004-of-00009.bin: 100%
9.51G/9.51G [00:33<00:00, 317MB/s]
Downloading (โฆ)l-00005-of-00009.bin: 100%
9.51G/9.51G [00:27<00:00, 210MB/s]
Downloading (โฆ)l-00006-of-00009.bin: 100%
9.51G/9.51G [00:34<00:00, 180MB/s]
Downloading (โฆ)l-00007-of-00009.bin: 100%
9.51G/9.51G [00:27<00:00, 307MB/s]
Downloading (โฆ)l-00008-of-00009.bin: 100%
9.51G/9.51G [00:27<00:00, 504MB/s]
Downloading (โฆ)l-00009-of-00009.bin: 100%
7.58G/7.58G [00:27<00:00, 315MB/s]
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.0
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so...
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: /usr/lib64-nvidia did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/sys/fs/cgroup/memory.events /var/colab/cgroup/jupyter-children/memory.events')}
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//172.28.0.1'), PosixPath('8013'), PosixPath('http')}
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//colab.research.google.com/tun/m/cc48301118ce562b961b3c22d803539adc1e0c19/gpu-a100-s-b20acq94qsrp --tunnel_background_save_delay=10s --tunnel_periodic_background_save_frequency=30m0s --enable_output_coalescing=true --output_coalescing_required=true'), PosixPath('--logtostderr --listen_host=172.28.0.12 --target_host=172.28.0.12 --tunnel_background_save_url=https')}
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/env/python')}
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//ipykernel.pylab.backend_inline'), PosixPath('module')}
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/usr/local/cuda/lib64/libcudart.so'), PosixPath('/usr/local/cuda/lib64/libcudart.so.11.0')}.. We'll flip a coin and try one of these, in order to fail forward.
Either way, this might cause trouble in the future:
If you get `CUDA error: invalid device function` errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env.
warn(msg)
Loading checkpoint shards: 100%
9/9 [05:45<00:00, 35.83s/it]
Downloading (โฆ)neration_config.json: 100%
111/111 [00:00<00:00, 10.3kB/s]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-1-c89997e10ae9>](https://localhost:8080/#) in <cell line: 15>()
13
14 config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
---> 15 model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, load_in_4bit=True, device_map="auto")
16
17 tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b-instruct")
3 frames
[/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py](https://localhost:8080/#) in to(self, *args, **kwargs)
1894 # Checks if the model has been loaded in 8-bit
1895 if getattr(self, "is_quantized", False):
-> 1896 raise ValueError(
1897 "`.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the"
1898 " model has already been set to the correct devices and casted to the correct `dtype`."
ValueError: `.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
```
### Expected behavior
Model should be loaded and able to run inference.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24540/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/24540/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24539
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24539/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24539/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24539/events
|
https://github.com/huggingface/transformers/issues/24539
| 1,778,265,924 |
I_kwDOCUB6oc5p_i9E
| 24,539 |
4-Bit and 8-Bit Models not being loaded
|
{
"login": "DJT777",
"id": 47899472,
"node_id": "MDQ6VXNlcjQ3ODk5NDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/47899472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DJT777",
"html_url": "https://github.com/DJT777",
"followers_url": "https://api.github.com/users/DJT777/followers",
"following_url": "https://api.github.com/users/DJT777/following{/other_user}",
"gists_url": "https://api.github.com/users/DJT777/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DJT777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DJT777/subscriptions",
"organizations_url": "https://api.github.com/users/DJT777/orgs",
"repos_url": "https://api.github.com/users/DJT777/repos",
"events_url": "https://api.github.com/users/DJT777/events{/privacy}",
"received_events_url": "https://api.github.com/users/DJT777/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hello I know it's a bit off-topic. I'm actually never inferred a model than runs in either 4-bit and 8-bit, but i'm really wondering. Does the model will inference much slower if we infer it in 8-bit quantized model or will be much faster than the fp16/fp32 models?\r\n\r\nThank you in advance if you have this kind of information and wanted to share"
] | 1,687 | 1,687 | 1,687 |
NONE
| null |
### System Info
I'm running into an issue where I'm not able to load a 4-bit or 8-bit quantized version of Falcon or LLaMA models. This was working a couple of weeks ago, around June 8th. This is running on Colab. I'm wondering if anyone knows of a fix, or why this is no longer working.
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.11 (gpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running on an A100 in Colab Pro
```
!pip install git+https://www.github.com/huggingface/transformers
!pip install git+https://github.com/huggingface/accelerate
!pip install bitsandbytes
!pip install einops
from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer
import torch
model_path="tiiuae/falcon-40b-instruct"
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, load_in_4bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b-instruct")
input_text = "Describe the solar system."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids, max_length=100)
print(tokenizer.decode(outputs[0]))
```
Cell output:
```
Collecting git+https://www.github.com/huggingface/transformers
Cloning https://www.github.com/huggingface/transformers to /tmp/pip-req-build-6pyatvel
Running command git clone --filter=blob:none --quiet https://www.github.com/huggingface/transformers /tmp/pip-req-build-6pyatvel
warning: redirecting to https://github.com/huggingface/transformers.git/
Resolved https://www.github.com/huggingface/transformers to commit e84bf1f734f87aa2bedc41b9b9933d00fc6add98
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (3.12.2)
Collecting huggingface-hub<1.0,>=0.14.1 (from transformers==4.31.0.dev0)
Downloading huggingface_hub-0.15.1-py3-none-any.whl (236 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 236.8/236.8 kB 11.6 MB/s eta 0:00:00
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (1.22.4)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (23.1)
Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (6.0)
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (2022.10.31)
Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (2.27.1)
Collecting tokenizers!=0.11.3,<0.14,>=0.11.1 (from transformers==4.31.0.dev0)
Downloading tokenizers-0.13.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.8 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 7.8/7.8 MB 114.2 MB/s eta 0:00:00
Collecting safetensors>=0.3.1 (from transformers==4.31.0.dev0)
Downloading safetensors-0.3.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 1.3/1.3 MB 79.9 MB/s eta 0:00:00
Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (4.65.0)
Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.14.1->transformers==4.31.0.dev0) (2023.6.0)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.14.1->transformers==4.31.0.dev0) (4.6.3)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.31.0.dev0) (1.26.16)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.31.0.dev0) (2023.5.7)
Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.31.0.dev0) (2.0.12)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.31.0.dev0) (3.4)
Building wheels for collected packages: transformers
Building wheel for transformers (pyproject.toml) ... done
Created wheel for transformers: filename=transformers-4.31.0.dev0-py3-none-any.whl size=7228417 sha256=5867afa880111a40f7b630e51d9f1709ec1131236a31c2c7fb5f97179e3d1405
Stored in directory: /tmp/pip-ephem-wheel-cache-t06u3u6x/wheels/c1/ac/11/e69d454307e735e14f4f95e575c8be27fd99835ec36f504c13
Successfully built transformers
Installing collected packages: tokenizers, safetensors, huggingface-hub, transformers
Successfully installed huggingface-hub-0.15.1 safetensors-0.3.1 tokenizers-0.13.3 transformers-4.31.0.dev0
Collecting git+https://github.com/huggingface/accelerate
Cloning https://github.com/huggingface/accelerate to /tmp/pip-req-build-76ziff6x
Running command git clone --filter=blob:none --quiet https://github.com/huggingface/accelerate /tmp/pip-req-build-76ziff6x
Resolved https://github.com/huggingface/accelerate to commit d141b4ce794227450a105b7281611c7980e5b3d6
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (1.22.4)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (23.1)
Requirement already satisfied: psutil in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (5.9.5)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (6.0)
Requirement already satisfied: torch>=1.6.0 in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (2.0.1+cu118)
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (3.12.2)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (4.6.3)
Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (1.11.1)
Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (3.1)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (3.1.2)
Requirement already satisfied: triton==2.0.0 in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (2.0.0)
Requirement already satisfied: cmake in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch>=1.6.0->accelerate==0.21.0.dev0) (3.25.2)
Requirement already satisfied: lit in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch>=1.6.0->accelerate==0.21.0.dev0) (16.0.6)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch>=1.6.0->accelerate==0.21.0.dev0) (2.1.3)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch>=1.6.0->accelerate==0.21.0.dev0) (1.3.0)
Building wheels for collected packages: accelerate
Building wheel for accelerate (pyproject.toml) ... done
Created wheel for accelerate: filename=accelerate-0.21.0.dev0-py3-none-any.whl size=234648 sha256=71b98a6d4b1111cc9ca22265f6699cd552325e5f71c83daebe696afd957497ee
Stored in directory: /tmp/pip-ephem-wheel-cache-atmtszgr/wheels/f6/c7/9d/1b8a5ca8353d9307733bc719107acb67acdc95063bba749f26
Successfully built accelerate
Installing collected packages: accelerate
Successfully installed accelerate-0.21.0.dev0
Collecting bitsandbytes
Downloading bitsandbytes-0.39.1-py3-none-any.whl (97.1 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 97.1/97.1 MB 18.8 MB/s eta 0:00:00
Installing collected packages: bitsandbytes
Successfully installed bitsandbytes-0.39.1
Collecting einops
Downloading einops-0.6.1-py3-none-any.whl (42 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 42.2/42.2 kB 3.8 MB/s eta 0:00:00
Installing collected packages: einops
Successfully installed einops-0.6.1
Downloading (โฆ)lve/main/config.json: 100%
658/658 [00:00<00:00, 51.8kB/s]
Downloading (โฆ)/configuration_RW.py: 100%
2.51k/2.51k [00:00<00:00, 227kB/s]
A new version of the following files was downloaded from https://huggingface.co/tiiuae/falcon-40b-instruct:
- configuration_RW.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
Downloading (โฆ)main/modelling_RW.py: 100%
47.1k/47.1k [00:00<00:00, 3.76MB/s]
A new version of the following files was downloaded from https://huggingface.co/tiiuae/falcon-40b-instruct:
- modelling_RW.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
Downloading (โฆ)model.bin.index.json: 100%
39.3k/39.3k [00:00<00:00, 3.46MB/s]
Downloading shards: 100%
9/9 [04:40<00:00, 29.33s/it]
Downloading (โฆ)l-00001-of-00009.bin: 100%
9.50G/9.50G [00:37<00:00, 274MB/s]
Downloading (โฆ)l-00002-of-00009.bin: 100%
9.51G/9.51G [00:33<00:00, 340MB/s]
Downloading (โฆ)l-00003-of-00009.bin: 100%
9.51G/9.51G [00:28<00:00, 320MB/s]
Downloading (โฆ)l-00004-of-00009.bin: 100%
9.51G/9.51G [00:33<00:00, 317MB/s]
Downloading (โฆ)l-00005-of-00009.bin: 100%
9.51G/9.51G [00:27<00:00, 210MB/s]
Downloading (โฆ)l-00006-of-00009.bin: 100%
9.51G/9.51G [00:34<00:00, 180MB/s]
Downloading (โฆ)l-00007-of-00009.bin: 100%
9.51G/9.51G [00:27<00:00, 307MB/s]
Downloading (โฆ)l-00008-of-00009.bin: 100%
9.51G/9.51G [00:27<00:00, 504MB/s]
Downloading (โฆ)l-00009-of-00009.bin: 100%
7.58G/7.58G [00:27<00:00, 315MB/s]
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.0
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so...
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: /usr/lib64-nvidia did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/sys/fs/cgroup/memory.events /var/colab/cgroup/jupyter-children/memory.events')}
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//172.28.0.1'), PosixPath('8013'), PosixPath('http')}
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//colab.research.google.com/tun/m/cc48301118ce562b961b3c22d803539adc1e0c19/gpu-a100-s-b20acq94qsrp --tunnel_background_save_delay=10s --tunnel_periodic_background_save_frequency=30m0s --enable_output_coalescing=true --output_coalescing_required=true'), PosixPath('--logtostderr --listen_host=172.28.0.12 --target_host=172.28.0.12 --tunnel_background_save_url=https')}
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/env/python')}
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//ipykernel.pylab.backend_inline'), PosixPath('module')}
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/usr/local/cuda/lib64/libcudart.so'), PosixPath('/usr/local/cuda/lib64/libcudart.so.11.0')}.. We'll flip a coin and try one of these, in order to fail forward.
Either way, this might cause trouble in the future:
If you get `CUDA error: invalid device function` errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env.
warn(msg)
Loading checkpoint shards: 100%
9/9 [05:45<00:00, 35.83s/it]
Downloading (โฆ)neration_config.json: 100%
111/111 [00:00<00:00, 10.3kB/s]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-1-c89997e10ae9>](https://localhost:8080/#) in <cell line: 15>()
13
14 config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
---> 15 model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, load_in_4bit=True, device_map="auto")
16
17 tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b-instruct")
3 frames
[/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py](https://localhost:8080/#) in to(self, *args, **kwargs)
1894 # Checks if the model has been loaded in 8-bit
1895 if getattr(self, "is_quantized", False):
-> 1896 raise ValueError(
1897 "`.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the"
1898 " model has already been set to the correct devices and casted to the correct `dtype`."
ValueError: `.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
```
### Expected behavior
Model should be loaded and able to run inference.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24539/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24538
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24538/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24538/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24538/events
|
https://github.com/huggingface/transformers/issues/24538
| 1,778,239,773 |
I_kwDOCUB6oc5p_ckd
| 24,538 |
Incorrect typing of `fsdp`, `fsdp_config`, and `sharded_ddp` in `TrainingArguments`
|
{
"login": "O-T-O-Z",
"id": 60617932,
"node_id": "MDQ6VXNlcjYwNjE3OTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/60617932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/O-T-O-Z",
"html_url": "https://github.com/O-T-O-Z",
"followers_url": "https://api.github.com/users/O-T-O-Z/followers",
"following_url": "https://api.github.com/users/O-T-O-Z/following{/other_user}",
"gists_url": "https://api.github.com/users/O-T-O-Z/gists{/gist_id}",
"starred_url": "https://api.github.com/users/O-T-O-Z/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/O-T-O-Z/subscriptions",
"organizations_url": "https://api.github.com/users/O-T-O-Z/orgs",
"repos_url": "https://api.github.com/users/O-T-O-Z/repos",
"events_url": "https://api.github.com/users/O-T-O-Z/events{/privacy}",
"received_events_url": "https://api.github.com/users/O-T-O-Z/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"We do not support any automatic type checker and our type annotations are only here to give documentation. We welcome PRs to make them more exact as long as it's not at the cost of code readability but the bottomline is that you shouldn't use a type-checker with hard errors on Transformers.",
"@sgugger So should I create a PR for this?"
] | 1,687 | 1,689 | 1,689 |
NONE
| null |
### System Info
- `transformers` version: 4.29.2
- Platform: macOS-14.0-arm64-arm-64bit
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.0.dev20220902 (False)
- Tensorflow version (GPU?): 2.9.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When initializing a `pydantic.BaseModel` as follows:
```python
from pydantic import BaseModel
from transformers.training_args import TrainingArguments
class MyTrainingArguments(TrainingArguments):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.my_arg = "my_arg"

class MyModel(BaseModel):
    training_args: MyTrainingArguments

model = MyModel(training_args=MyTrainingArguments(output_dir=""))
```
The following `ValidationErrors` occur:
```shell
ValidationError: 4 validation errors for MyModel
training_args -> debug
str type expected (type=type_error.str)
training_args -> sharded_ddp
str type expected (type=type_error.str)
training_args -> fsdp
str type expected (type=type_error.str)
training_args -> fsdp_config
str type expected (type=type_error.str)
```
Since `debug` has been fixed in #24033, my main concern is the remaining parameters.
After investigation, I discovered that the `__post_init__()` method changes these parameters from their default `str` values to, for example, `dict`, `bool`, or `List`. This becomes a problem for Pydantic (and other type checkers) since the validation will be incorrect, while the docstring of `TrainingArguments` describes the following for these parameters:
```python
"""
sharded_ddp (`bool`, `str` or list of [`~trainer_utils.ShardedDDPOption`], *optional*, defaults to `False`)
fsdp (`bool`, `str` or list of [`~trainer_utils.FSDPOption`], *optional*, defaults to `False`)
fsdp_config (`str` or `dict`, *optional*)
"""
```
### Expected behavior
I would like to resolve these issues by providing the correct type hints. This could look as follows:
```python
sharded_ddp: Union[Optional[str], bool, List[ShardedDDPOption]]
fsdp: Union[Optional[str], bool, List[FSDPOption]]
fsdp_config: Union[Optional[str], Dict]
```
I checked this configuration and it resolves the issue.
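For illustration, here is a minimal sketch of where these annotations would live on the dataclass (the defaults shown are assumptions and the real `field(...)` metadata is omitted — this is a sketch of the proposed change, not the actual patch):
```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Union

from transformers.trainer_utils import FSDPOption, ShardedDDPOption


@dataclass
class TrainingArguments:
    # ... all other TrainingArguments fields stay as they are ...
    sharded_ddp: Union[Optional[str], bool, List[ShardedDDPOption]] = field(default="")
    fsdp: Union[Optional[str], bool, List[FSDPOption]] = field(default="")
    fsdp_config: Union[Optional[str], Dict] = field(default=None)
```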
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24538/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24537
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24537/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24537/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24537/events
|
https://github.com/huggingface/transformers/issues/24537
| 1,778,104,540 |
I_kwDOCUB6oc5p-7jc
| 24,537 |
Finetuning Whisper with multi-languages
|
{
"login": "LYPinASR",
"id": 112866899,
"node_id": "U_kgDOBro2Uw",
"avatar_url": "https://avatars.githubusercontent.com/u/112866899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LYPinASR",
"html_url": "https://github.com/LYPinASR",
"followers_url": "https://api.github.com/users/LYPinASR/followers",
"following_url": "https://api.github.com/users/LYPinASR/following{/other_user}",
"gists_url": "https://api.github.com/users/LYPinASR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LYPinASR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LYPinASR/subscriptions",
"organizations_url": "https://api.github.com/users/LYPinASR/orgs",
"repos_url": "https://api.github.com/users/LYPinASR/repos",
"events_url": "https://api.github.com/users/LYPinASR/events{/privacy}",
"received_events_url": "https://api.github.com/users/LYPinASR/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,687 | 1,687 | 1,687 |
NONE
| null |
### Feature request
Finetuning Whisper with multi-languages
### Motivation
Finetuning Whisper with multi-languages
### Your contribution
Finetuning Whisper with multi-languages
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24537/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24536
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24536/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24536/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24536/events
|
https://github.com/huggingface/transformers/issues/24536
| 1,778,047,886 |
I_kwDOCUB6oc5p-tuO
| 24,536 |
Add Classifier-Free Guidance sampling
|
{
"login": "Vermeille",
"id": 1108219,
"node_id": "MDQ6VXNlcjExMDgyMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1108219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vermeille",
"html_url": "https://github.com/Vermeille",
"followers_url": "https://api.github.com/users/Vermeille/followers",
"following_url": "https://api.github.com/users/Vermeille/following{/other_user}",
"gists_url": "https://api.github.com/users/Vermeille/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vermeille/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vermeille/subscriptions",
"organizations_url": "https://api.github.com/users/Vermeille/orgs",
"repos_url": "https://api.github.com/users/Vermeille/repos",
"events_url": "https://api.github.com/users/Vermeille/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vermeille/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @gante \r\nBut let's see if the community requests this added feature before implementing it in the library proper :-)",
"Hey @Vermeille ๐ \r\n\r\nI have the impression that our MusicGen PR (still open, expected to get merged soon) introduces the bulk of the logic to make it happen -- see [this file](https://github.com/huggingface/transformers/pull/24109/files#diff-d23b812af8462833ad280d968f3e6e2ee7558bacfc2716cdde44a07bead5e065R1070)\r\n\r\nIt is the same thing with a slightly different code implementation, correct? In the MusicGen PR, the model does a forward pass with 2x the batch size, where half of the batch corresponds to the unprompted tokens",
"Indeed @gante !\r\n\r\nI don't fully get how the 2x batch size thing works, but if it does, it's cool.\r\nThe paper makes some more additions to that base implementation:\r\n1) the `uncond_logits` might in fact have a different prompt than the `cond_logits`, which is commonly called \"negative prompt\".\r\n2) the comment says \"usually at the expense of poorer quality\". This can be mitigated with linearly interpolating the cfg `scores` back with with the initial `scores`\r\n3) We had better results `log_softmax`ing both scores before cfg, which normalizes both logits sets to a common \"scale\".",
"cc @sanchit-gandhi, who's probably better equipped to comment on potential differences :)",
"Hey @Vermeille - thanks for the comprehensive write-up! Just a clarifying question: in your implementation, how do you construct the token ids for the model based on the conditional ids and the un-conditional ones? You mention:\r\n\r\n> inputs_cfg usually is the last token of the prompt but there are\r\n\r\nWhich suggests you concatenate them together in the same batch item?\r\n\r\nIn MusicGen (and also the HF Diffusers library for models like Stable Diffusion), we construct our input ids by concatenating the input ids for the conditional prompt and the un-conditional prompt along the batch dimension (`dim=0`):\r\n\r\n```python\r\ninput_ids = torch.concatenate([conditional_ids, unconditional_ids], dim=0)\r\n```\r\n\r\nThis is what's referred to by the 2x batch size 'trick' (concatenating the conditional prompt and unconditional prompt over the batch dim). There's no restriction to how these unconditional ids are formed - they can be from a 'null' input, or from a negative prompt. So we can do negative prompting in exactly the way you've described.\r\n\r\nWhen we run our model forward, the logits for the first half of the batch corresponds to the conditional prompt, and the second half to the unconditional prompt (or negative prompt if we use one).\r\n\r\nBy splitting along the batch dim, we can partition the conditional logits and the unconditional ones:\r\n\r\n```python\r\nconditional_logits, unconditional_logits = torch.split(logits, batch_size // 2)\r\n```\r\n\r\n-> we then perform our weighted sum over the conditional and unconditional logits for CFG.\r\n\r\nHope that explains how the 2x batch size trick works - would be keen to hear whether this aligns with how you've run CFG in your experiments.\r\n\r\nRegarding implementing a new logits processor, we'd probably want to add this new logits processor when the time comes for integrating the model you've worked on into `transformers`, rather than adding it solely as a standalone logits processor. `transformers` is less of a modular toolbox for building new models, more a library for housing the most popular OS ML models\r\n\r\nHave you trained a new model that uses this processor? Or built on-top of an existing one? (if it's the latter, then adding the CFG logits processor standalone makes sense, otherwise let's integrate it all in one go)",
"Thank you for your detailed answer @sanchit-gandhi !\r\n\r\nThe part I'm the most unclear with regarding the 2x batch trick is how the sampling happen. Do you actually sample the same continuation token for the conditional and unconditional branch, or do they diverge in their own direction (which would be weird imho)?\r\n\r\nRegarding the integration, _there is no need to train models to support CFG, it works out of the box_. The paper will be out in few days, but as you can see on the figures, we employed it with LLaMA models, all Pythias, GPT-2 family, and even GPT4All. We don't train a new model. It's meant to be an addition to the .generate() method that is totally model agnostic and don't need training nor finetuning. Hence the PR with the standalone logits processor :)",
"[The paper is out](https://arxiv.org/abs/2306.17806)",
"Maybe this helps!\r\n\r\nPre-processing:\r\n* conditional text -> `conditional_ids` (bsz)\r\n* negative text -> `unconditional_ids` (bsz)\r\n* `input_ids` = `[conditional_ids, unconditional_ids]` (2 * bsz since we've done a concat)\r\n\r\nForward pass:\r\n* `logits` (2 * bsz since they come from the `input_ids`)\r\n\r\nCFG:\r\n* `conditional_logits`, `unconditional_logits` = `logits[:bsz]`, `logits[bsz:]` (so each one is bsz since we've done a split)\r\n* `scores` = weighted_sum(`conditional_logits`, `unconditional_logits`; `guidance_scale`) (bsz)\r\n\r\nSampling:\r\n* next token = sample(`scores`) (bsz num tokens -> we combined the cond/uncond logits to get the scores, so we only have bsz `scores`, and thus bsz num tokens)\r\n\r\nHow have you been getting the conditional and unconditional logits in your experiments? Through two forward passes? (one with the conditional inputs and then a second with the unconditional ones). This batch size concatenation trick means you only have to run one forward pass, but with 2x the batch size\r\n\r\nThe only pain point I see with getting this work in `transformers` is this batch size change as we go from our forward pass to our sampling loop. But we can add some logic to change the batch size on the fly if we're doing CFG (kind of like we did for MusicGen @gante - we need to trick the forward pass into using 2 * bsz, then the decoder ids to use bsz).\r\n\r\n> _here is no need to train models to support CFG, it works out of the box_\r\n\r\nVery cool indeed! Would be nice to have this as a standalone PR then as suggested",
"Thank you!\r\nYeah if the cond and uncond prompts gets the same next token sampled, it's good wrt to our experiments! That's how you manage to loop around in the .generate() to grow the continuation token per token and zigzaging between bsz and 2bsz that I'm not 100% clear with. I totally see how it works for _one_ forward pass. Totally an implementation detail :) But apparently that's a new trick you had to implement for MusicGen too so it makes sense that I'm not perfectly clear with that.\r\n\r\n> Would be nice to have this as a standalone PR then as suggested\r\n\r\nI'm happy to address the changes that have to be made to contribute this into the lib :)",
"Awesome - feel free to open a PR and tag myself and @gante! How do you do it without the 2x batch size trick? Do you do two forward passes? Just asking in case there's a simpler way we can integrate this!",
"(catching up on the paper and thinking a bit about usage experience -- will comment tomorrow with specific suggestions, but I think @Vermeille's suggested implementation above will be pretty close to a great user experience with minimal compute overhead)",
"here is an alternative implementation we used for some of our other experiments in the paper, for your consideration.\r\n\r\nit was designed with huggingface's typical `*ModelFor*` code-style in mind, which just puts the base model in the `init` and extends the `forward()` method\r\nhttps://github.com/Vermeille/lm-evaluation-harness-cfg/blob/cfg-alex/log_logits_on_p3.py#L30-L97",
"> Awesome - feel free to open a PR and tag myself and @gante! How do you do it without the 2x batch size trick? Do you do two forward passes? Just asking in case there's a simpler way we can integrate this!\r\n\r\nYes. Two consecutive passes. Which is indeed not that great wrt latency.",
"Would be great to have both the 2x batch size and two forward passes. Since 2x batch size is better for throughput but the two forward passes are much better for VRAM usage, as the Paper outlines\r\n\r\n(unless I missunderstood)",
"So given you already have this ( https://github.com/huggingface/transformers/blob/main/src/transformers/generation/logits_process.py#L1070 )\r\n\r\nWhat do you want me to add / change in the PR?",
"> Would be great to have both the 2x batch size and two forward passes. Since 2x batch size is better for throughput but the two forward passes are much better for VRAM usage, as the Paper outlines\r\n> \r\n> (unless I missunderstood)\r\n\r\nThis is correct: our focus was on getting the best results for a fixed amount of VRAM in our experiments. Hence it didn't occur to us to simply 2x the batch size. I agree that having this be togglable is a good idea and don't have any preference about the default.",
"The application to LLMs seems more of a situational sampling technique. With smaller conditional generative models like MusicGen, trained from-scratch with (explicit) condition dropout, it's practically part of the model. MusicGen isn't the first AR Transformer here, last year's DALL-E Mega [already did it](https://github.com/borisdayma/dalle-mini/blob/cb2cf37d07a83a92f37b5e1e0568efdb89e52812/src/dalle_mini/model/modeling.py#L1896) (itself inspired by https://twitter.com/RiversHaveWings/status/1478093658716966912 ), and in these models it's essential for performance.\r\n\r\nSo I'd expect \"batch size 1 dramatically underutilizes available resources\" to be the more common case.\r\n\r\n> Since 2x batch size is better for throughput but the two forward passes are much better for VRAM usage, as the Paper outlines\r\n\r\nDepending on model and hardware, \"biggest batch size that fits\" isn't necessarily optimal. On decent hardware, you can hit optimal compute utilisation before VRAM limits with batched inference in smaller models.\r\n\r\n---\r\n\r\nNormalizing the summands, then interpolating with the original scores is intriguing. If adding this to the CFG implementation that's now in Transformers is still being considered, this would be unexpected as default behavior though. In diffusion models, it's not applicable, and in sequence prediction, I've only seen people combine the unnormalized scores.",
"@drdaxxy \r\n\r\n> Normalizing the summands, then interpolating with the original scores is intriguing. [...] In diffusion models, it's not applicable\r\n\r\nThis is a technique we borrowed from [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/abs/2305.08891) they call CFG Rescale. You can see [Imagen](https://arxiv.org/abs/2205.11487) doing some normalizing trick too.\r\n\r\n> in sequence prediction, I've only seen people combine the unnormalized scores.\r\n\r\nThat's what we started with, and our results were a little bit worse.",
"This method is interesting to implement from an engineering and maintenance point of view!\r\n\r\nThe simplest approach would be to proceed as @Vermeille suggested: add a logits processor that calls a model forward pass for the unconditional part of the input. It would be a small self-contained piece of code, which means low long-term maintenance on our end. On the negative side, we have the 2x latency, which is more impactful than the extra VRAM (IMO).\r\n\r\nIf we go the 2x batch size route, we need to implement a function like `greedy_search` or `sample` -- a long function with non-negligible maintenance costs on our end. I believe this would be the best form of CFG sampling. However, we are severely constrained by our ability to keep the machine up and running at a good pace, so we can quickly add new features like CFG sampling :D \r\n\r\nWe have a plan to reorganize `generate` such that it is entirely made of small functions, making it much more composable. In the way I'm envisioning it, the 2x batch size version of CFG sampling would need a few extra lines of code, as opposed to a new large function. \r\n\r\nHow about we go with @Vermeille's proposal now, which will make CFG sampling available this week with low overhead on our end, and we implement the 2x batch size version after the `generate` refactor is complete? The new logits processor class would need a different name, as we already have `ClassifierFreeGuidanceLogitsProcessor` for the 2x batch size case (perhaps `UnbatchedClassifierFreeGuidanceLogitsProcessor`?)",
"Expect a PR in few hours.\r\n\r\nThank you for your interest and answers!",
"@gante There is a name clash for the arguments to .generate(). For this PR, unless instructed otherwise before I submit it, `cfg_scale` (mine) will live next to `guidance_scale` (MusicGen's). Idk how to resolve this competition, give that .generate() does not seem ready to use the 2x batch trick yet.",
"@Vermeille Adding more (and partially redundant) parameterization is highly undesirable, and we'd want to favor the more general case (yours). You also have the additional requirement of renormalizing the logits before applying your logits processor. Fortunately, we haven't officially released a `transformers` version with `MusicGen`, so we still have some wiggle room!\r\n\r\n Let's try to fit everything together -- here's my suggestion:\r\n- your logits processor uses the same parameter, `guidance_scale`, and it's triggered by its presence\r\n- EDIT: this is not needed ~your logits processor is added after the normalization one (after [this if](https://github.com/huggingface/transformers/blob/cd4584e3c809bb9e1392ccd3fe38b40daba5519a/src/transformers/generation/utils.py#L948)), and the normalization step is now also triggered when `guidance_scale` is non-`None`~\r\n- `ClassifierFreeGuidanceLogitsProcessor` (`MusicGen`'s) is removed from the function that prepares the logits processors, and we modify [MusicGen's generation function](https://github.com/huggingface/transformers/blob/cd4584e3c809bb9e1392ccd3fe38b40daba5519a/src/transformers/models/musicgen/modeling_musicgen.py#L1184) to handle its special processor: if `guidance_scale` is present when we generate with `MusicGen`, we pop it and manually add its CFG processor. I can take care of this part if you don't feel comfortable touching `MusicGen` :) \r\n\r\nThis way the two strategies can coexist, share the argument, and not clash ๐ค ",
"Great! Thank you for the walkthrough.\r\n\r\nOn it.",
"Wait @gante, integrating it after the LogitNormalization is not something we want: all the prior processing (temperature, top_p, etc), will be used only on the conditional branch and not the unconditional, and will be executed _before_ computing the CFG logits. To be fair, we haven't tested this transformation order, but being asymmetrical like this scares me.\r\n\r\nAnd this is is even invalid. Top-k/p may not even select the same tokens in both branches, so that will misbehave.\r\n\r\nI'm afraid I can't do that. CFG has to happen as one of the first logitprocessor",
"@Vermeille looking at your code example above, I didn't notice it already had normalization inside the processor. My bad -- feel free to add it as the 1st one :) \r\n\r\n(will edit my comment above accordingly, for clarity)",
"So this is the code I got to get it working. It is just a hack but if you want to playwith it just use this code \r\n```python3\r\nfrom transformers import LogitsWarper\r\nimport torch\r\nfrom torch.nn import functional as F\r\n\r\ndevice = 'cpu'\r\nif torch.has_cuda:\r\n device = 'cuda'\r\n\r\nclass CFGLogits(LogitsWarper):\r\n\r\n def __init__(self, cfg, inputs, model, verbose=True):\r\n self.cfg = cfg\r\n self.inputs = inputs\r\n self.model = model\r\n self.out = None\r\n self.verbose = verbose\r\n\r\n def __call__(self, input_ids, scores):\r\n if self.cfg == 1:\r\n return F.log_softmax(scores, dim=-1)\r\n scores = F.log_softmax(scores, dim=-1)\r\n if self.out is None:\r\n self.out = self.model(self.inputs.to(device), use_cache=True)\r\n else:\r\n self.out = self.model(input_ids[:, -1:],\r\n use_cache=True,\r\n past_key_values=self.out.past_key_values)\r\n unconditional_logits = F.log_softmax(self.out.logits[0][-1:], dim=-1)\r\n out = self.cfg * (scores - unconditional_logits) + unconditional_logits\r\n out = F.log_softmax(out, dim=-1)\r\n return 0.7 * out + 0.3 * scores\r\n \r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\nfrom transformers import LogitsProcessorList, TemperatureLogitsWarper, TopPLogitsWarper\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"EleutherAI/pythia-160m\")\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"EleutherAI/pythia-160m\")\r\n\r\nprompt = \"Salve, dispiculi.\"\r\ninputs = tokenizer(prompt, return_tensors='pt')\r\nmodel.to(device)\r\noutputs = model.generate(\r\n input_ids=inputs['input_ids'].to(device),\r\n attention_mask=inputs['attention_mask'].to(device),\r\n max_new_tokens=125,\r\n logits_processor=LogitsProcessorList([\r\n # inputs_cfg usually is the last token of the prompt but there are\r\n # possibilities of negative prompting that are explored in the paper\r\n CFGLogits(3, inputs['input_ids'], model),\r\n TemperatureLogitsWarper(0.8),\r\n TopPLogitsWarper(0.95),\r\n ]),\r\n do_sample=True,\r\n)\r\n\r\nprint(tokenizer.decode(outputs[0]))\r\n```\r\nThis worked on my end",
"@grantCelley 's code works for me.\r\n\r\n## With CFG (pythia 160m)\r\n\r\n\r\n\r\n## Without CFG\r\n\r\n",
"@grantCelley @chris-aeviator \r\nThe line `CFGLogits(3, inputs['input_ids'], model),` should really be `CFGLogits(3, inputs['input_ids'][:, -1:], model),`",
"thanks for pointing it out, my 30 was a typo, but your prev. code doesnt seem to mention the [:, -1:] ?!",
"@chris-aeviator notice how it uses `input_cfg`:\r\n\r\n```python\r\n # inputs_cfg usually is the last token of the prompt but there are\r\n # possibilities of negative prompting that are explored in the paper\r\n CFGLogits(cfg, inputs_cfg, model),\r\n```"
] | 1,687 | 1,693 | 1,691 |
CONTRIBUTOR
| null |
EDIT: ===========================
As I see many people copy-pasting this initial code, which was meant to be a basis for discussion, here is a cleaner version (yet not perfect! We're still doing improvement rounds with the Hugging Face team to improve it! Check the state of the PR until it is merged: https://github.com/huggingface/transformers/pull/24654 ).
```python
from transformers import (GPT2Tokenizer, AutoModelForCausalLM,
GPTNeoXForCausalLM, AutoTokenizer)
import numpy as np
import torch
from transformers import (LogitsProcessor, LogitsProcessorList,
MinLengthLogitsProcessor, TemperatureLogitsWarper,
TopKLogitsWarper, TopPLogitsWarper,
TypicalLogitsWarper)
from transformers.generation import LogitNormalization
import torch.nn.functional as F
class CFGLogits(LogitsProcessor):
r"""Logits processor for Classifier-Free Guidance (CFG). The processors
computes a weighted average across scores from prompt conditional and prompt unconditional (or negative) logits,
parameterized by the `guidance_scale`. The unconditional scores are computed internally by prompting `model` with
the `uncond` branch. Finally, according to CFG Rescale, the reweighted logits are interpolated back with weight
`rescale_factor` the conditional ones to smooth the effect and increase output quality.
See [the paper](https://arxiv.org/abs/2306.17806) for more information.
Args:
guidance_scale (float):
The guidance scale for classifier free guidance (CFG). CFG is enabled by setting `guidance_scale > 1`.
Higher guidance scale encourages the model to generate samples that are more closely linked to the input
prompt, usually at the expense of poorer quality.
uncond (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary for the unconditional branch.
model:
The LM computing the unconditional scores. Supposedly the same as the one computing the conditional scores.
Both models must use the same tokenizer.
"""
    def __init__(self, guidance_scale, uncond, model, rescale_factor=1.0):
        self.guidance_scale = guidance_scale
        self.uncond = uncond
        self.model = model
        self.out = None
        self.rescale_factor = rescale_factor
def __call__(self, input_ids, scores):
scores = F.log_softmax(scores, dim=-1)
if self.guidance_scale == 1:
return scores
if self.out is None:
self.out = self.model(self.uncond, use_cache=True)
else:
self.out = self.model(
input_ids[:, -1:],
use_cache=True,
past_key_values=self.out.past_key_values,
)
unconditional_logits = F.log_softmax(self.out.logits[0][-1:], dim=-1)
out = self.guidance_scale * (scores - unconditional_logits) + unconditional_logits
        # CFG Rescale (a no-op when rescale_factor == 1): interpolate back towards the conditional scores
        return self.rescale_factor * out + (1 - self.rescale_factor) * scores
# paper usage: (copying and editing @grantCelley 's answer)
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import LogitsProcessorList, TemperatureLogitsWarper, TopPLogitsWarper
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m")
prompt = tokenizer("Today a dragon flew over Paris, France,", return_tensors='pt')
# either provide a negative prompt:
neg_prompt = tokenizer("A sad event happened,", return_tensors='pt')['input_ids']
# or don't:
# neg_prompt = prompt['input_ids'][:, -1:]
device='cuda:0'
model.to(device)
outputs = model.generate(
input_ids=prompt['input_ids'].to(device),
attention_mask=prompt['attention_mask'].to(device),
max_new_tokens=125,
logits_processor=LogitsProcessorList([
# inputs_cfg usually is the last token of the prompt but there are
# possibilities of negative prompting that are explored in the paper
CFGLogits(1.5, neg_prompt.to(device), model),
TemperatureLogitsWarper(0.8),
TopPLogitsWarper(0.95),
]),
do_sample=True,
)
print(tokenizer.decode(outputs[0]))
```
===============================
### Feature request
Hello!
I wish to contribute CFG sampling. I'm working with EleutherAI and @StellaAthena and will have a paper about it by Friday. CFG brings non-trivial improvements on many standard benchmarks. It contrasts the logits for the next token $P(w_t|w_{..t}, prompt)$ with those of the input deprived of the prompt $P(w_t|w_{..t})$, by defining
$$
\log P_{\text{cfg}}(w|w_{..t}, prompt) = \log P(w|w_{..t}) + \text{cfg} \cdot (\log P(w|w_{..t}, prompt) - \log P(w|w_{..t}))
$$
And then we can blend $\log P_{\text{cfg}}$ with $\log P(w|w_{..t}, prompt)$ to smoothen that distribution a bit, but it's optional.
### Motivation
My current implementation is:
```python
class CFGLogits(LogitsWarper):
def __init__(self, cfg, inputs, model, verbose=True):
self.cfg = cfg
self.inputs = inputs
self.model = model
self.out = None
self.verbose = verbose
def __call__(self, input_ids, scores):
if self.cfg == 1:
return F.log_softmax(scores, dim=-1)
scores = F.log_softmax(scores, dim=-1)
if self.out is None:
self.out = self.model(self.inputs.to(device), use_cache=True)
else:
self.out = self.model(input_ids[:, -1:],
use_cache=True,
past_key_values=self.out.past_key_values)
unconditional_logits = F.log_softmax(self.out.logits[0][-1:], dim=-1)
out = self.cfg * (scores - unconditional_logits) + unconditional_logits
out = F.log_softmax(out, dim=-1)
return 0.7 * out + 0.3 * scores
# usage:
outputs = model.generate(
input_ids=inputs['input_ids'].to(device),
attention_mask=inputs['attention_mask'].to(device),
max_new_tokens=l,
logits_processor=LogitsProcessorList([
# inputs_cfg usually is the last token of the prompt but there are
# possibilities of negative prompting that are explored in the paper
CFGLogits(cfg, inputs_cfg, model),
TemperatureLogitsWarper(0.8),
TopPLogitsWarper(0.95),
]),
do_sample=True,
)
```
I am not familiar enough with the design guidelines of HF to know if this implementation as a LogitsWarper is satisfactory.
just a few figures supporting the claims:





### Your contribution
I can contribute the code but I need to be guided as I don't know the exact design guidelines and overall architecture of HF.
Thank you for your time!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24536/reactions",
"total_count": 11,
"+1": 8,
"-1": 0,
"laugh": 3,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24536/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24535
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24535/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24535/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24535/events
|
https://github.com/huggingface/transformers/pull/24535
| 1,778,001,304 |
PR_kwDOCUB6oc5UF3me
| 24,535 |
Finishing tidying keys to ignore on load
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24535). All of your documentation changes will be reflected on that endpoint."
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
A couple of tests are failing on main due to #24505 (and PRs merged between the start of the work on this branch and its merge). This should fix all of them.
Note: will merge as soon as CI is green to make main green again.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24535/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24535/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24535",
"html_url": "https://github.com/huggingface/transformers/pull/24535",
"diff_url": "https://github.com/huggingface/transformers/pull/24535.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24535.patch",
"merged_at": 1687916115000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24534
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24534/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24534/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24534/events
|
https://github.com/huggingface/transformers/issues/24534
| 1,777,918,796 |
I_kwDOCUB6oc5p-ONM
| 24,534 |
`AutoModelForCausalLM.from_config` doesn't handle `revision="79ec93c"` correctly
|
{
"login": "ibeltagy",
"id": 2287797,
"node_id": "MDQ6VXNlcjIyODc3OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2287797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ibeltagy",
"html_url": "https://github.com/ibeltagy",
"followers_url": "https://api.github.com/users/ibeltagy/followers",
"following_url": "https://api.github.com/users/ibeltagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ibeltagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ibeltagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ibeltagy/subscriptions",
"organizations_url": "https://api.github.com/users/ibeltagy/orgs",
"repos_url": "https://api.github.com/users/ibeltagy/repos",
"events_url": "https://api.github.com/users/ibeltagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ibeltagy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"You should pass `code_revision` and not `revision` in your second call: you are loading the model code for that revision, not the weights.",
"Works. Thank you."
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
### System Info
To reproduce:
```
from transformers import AutoModelForCausalLM, AutoConfig
config = AutoConfig.from_pretrained("mosaicml/mpt-7b", trust_remote_code=True, revision="79ec93c")
config.n_layers = 0
config.d_model = 16
config.n_heads = 4
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True, revision = "79ec93c")
```
and the error is:
```
In [21]: model = AutoModelForCausalLM.from_config(config, trust_remote_code=True, revision = "79ec93c")
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[21], line 1
----> 1 model = AutoModelForCausalLM.from_config(config, trust_remote_code=True, revision = "79ec93c")
File /opt/conda/envs/torch2.0.1/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:422, in _BaseAutoModelClass.from_config(cls, config, *
*kwargs)
420 model_class = get_class_from_dynamic_module(class_ref, repo_id, **kwargs)
421 _ = kwargs.pop("code_revision", None)
--> 422 return model_class._from_config(config, **kwargs)
423 elif type(config) in cls._model_mapping.keys():
424 model_class = _get_model_class(config, cls._model_mapping)
File /opt/conda/envs/torch2.0.1/lib/python3.10/site-packages/transformers/modeling_utils.py:1143, in PreTrainedModel._from_config(cls, config, **kwargs)
1141 model = cls(config, **kwargs)
1142 else:
-> 1143 model = cls(config, **kwargs)
1145 # restore default dtype if it was modified
1146 if dtype_orig is not None:
TypeError: MPTForCausalLM.__init__() got an unexpected keyword argument 'revision'
```
### Who can help?
Probably @sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Provided in the issue message
### Expected behavior
Loading the model not crashing
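For reference, here is a sketch of the working call based on the suggestion in the comments above — `code_revision` selects the revision of the remote model code, while `revision` only applies to weights and otherwise ends up being forwarded to the model constructor:
```python
from transformers import AutoModelForCausalLM, AutoConfig

config = AutoConfig.from_pretrained("mosaicml/mpt-7b", trust_remote_code=True, revision="79ec93c")
config.n_layers = 0
config.d_model = 16
config.n_heads = 4
# Pass `code_revision` (revision of the remote model code) instead of `revision`,
# since `from_config` loads no weights and would forward `revision` to MPTForCausalLM.__init__.
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True, code_revision="79ec93c")
```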
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24534/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24533
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24533/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24533/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24533/events
|
https://github.com/huggingface/transformers/issues/24533
| 1,777,746,507 |
I_kwDOCUB6oc5p9kJL
| 24,533 |
[BUG] Protobuf not being correctly installed
|
{
"login": "yinweisu",
"id": 21029745,
"node_id": "MDQ6VXNlcjIxMDI5NzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/21029745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yinweisu",
"html_url": "https://github.com/yinweisu",
"followers_url": "https://api.github.com/users/yinweisu/followers",
"following_url": "https://api.github.com/users/yinweisu/following{/other_user}",
"gists_url": "https://api.github.com/users/yinweisu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yinweisu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yinweisu/subscriptions",
"organizations_url": "https://api.github.com/users/yinweisu/orgs",
"repos_url": "https://api.github.com/users/yinweisu/repos",
"events_url": "https://api.github.com/users/yinweisu/events{/privacy}",
"received_events_url": "https://api.github.com/users/yinweisu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Transformers is only compatible with protobuf<4.0 because packages we depend on (sentencepiece if I recall correctly) do not support protobuf 4 yet. There is little we can do on our side until they support it.\r\n\r\ncc @ydshieh in case I said something wrong.",
"Hi @yinweisu \r\n\r\nYou can uninstall `protobuf` and reinstall it to get `3.20.3`.\r\n\r\nRegarding protobuf 4, unfortunately, we get errors (see below) for sentencepiece based tokenizers. I am also told that `google/sentencepiece` doesn't support protobuf 4 yet, but from its GitHub page, I can't see this information. I plan to open an issue to ask questions there.\r\n\r\n\r\n```bash\r\nIf this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.\r\nIf you cannot immediately regenerate your protos, some other possible workarounds are:\r\n 1. Downgrade the protobuf package to 3.20.x or lower.\r\n 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).\r\n\r\nMore information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates\r\n```",
"@yinweisu \r\n\r\nMaybe you want/could/love to give [this issue](https://github.com/google/sentencepiece/issues/889) a ๐ ๐ค ๐ ",
"Thanks! I know you can reinstall it. The issue is that we are an open source project, which depends on transformers and other libraries. Other libraries will install newer version of protobuf, which doesn't work with transformers. This means users consume our package will run into protobuf errors. Ideally, transformers should correctly pin protobuf and install the correct version while doing `pip install`. The fact is that, transformers does pin it in the setup.py, though it's not taking effect",
"> Transformers is only compatible with protobuf<4.0 because packages we depend on (sentencepiece if I recall correctly) do not support protobuf 4 yet. There is little we can do on our side until they support it.\r\n> \r\n> cc @ydshieh in case I said something wrong.\r\n\r\n@sgugger I understand it doesn't work with protobuf>4.0. But transformers should correctly install protobuf < 4.0 as its dependency, which is not what's happening right now",
"@yinweisu \r\n\r\nI probably need to check what happens if a higher version is already installed when a user installs `transformers`.\r\n\r\nCould you show us your way of installing `transformers`? ๐ Thanks.",
"It will just use the higher version. We've got this problem in our own package installation.\r\n\r\nYou should be easily reproduce it via\r\n```bash\r\npip install -U protobuf\r\npip install transformers\r\npip freeze | grep protobuf\r\n```",
"@yinweisu After looking `setup.py`, at the end in that file, the `install_requires` only contains a few libraries:\r\n\r\nhttps://github.com/huggingface/transformers/blob/66954ea25e342fd451c26ec1c295da0b8692086b/setup.py#L415-L426\r\n\r\nand `protobuf` is not in this list. It's why `protobuf` is not checked and updated in the case you mentioned.\r\n\r\nFrom README:\r\n\r\n> Then, you will need to install at least one of Flax, PyTorch or TensorFlow.\r\n\r\nI believe `install_requires` is set to be minimal as it is in current version. But @sgugger knows better than me regarding this and may have some comments.\r\n\r\n",
"protobuf is not a core dependency of Transformers, it is only installed through other packages (most likely sentencepiece). I don't think any of the packages here have a dependency on protobuf but I may be msitaken.",
"https://github.com/huggingface/transformers/blob/main/setup.py#L145\r\nProtobuf is listed here. So if the library requires a specific version of dependency to work, it should be installed along, right? ",
"This list only contains soft dependencies. This one is used when you do `pip install transformers[sentencepiece]` for instance.",
"So I only install transformers, but I can still run into this problem if protobuf is either already presented in my env or being installed along with other packages. If transformers requires a specific version of protobuf to function, it should pin that version and install it.",
"As mentioned in \r\n\r\n> You should install ๐ค Transformers in a (new) virtual environment.\r\n\r\nAnd you can simply install as `pip install transformers[dev]` or something smaller `pip install transformers[\"dev-torch\"]`.",
"@ydshieh I really don't want to argue with you and I appreciate your fast response, but imagine such an use case:\r\n\r\nOur package AutoGluon, depends on `transformers` and package `foo`, both of which are in our setup.py.\r\npackage `foo` has `protobuf` in its dependency as `protobuf>=3.15.3` let's say while transformer didn't pin `protobuf`\r\n\r\nWhen user creates a new virtual environment and do a pip install autogluon, it will install `foo` and `transformers,` which will then install the latest `protobuf` because the latest `protobuf` satisfy `protobuf>=3.15.3`\r\n\r\nYou see the problem here? It's not about if it's a fresh env or not.\r\n\r\nThe only way to solve this as AutoGluon for now is that we had to manually pin `protobuf` by looking at both requirements from `transformers `and `foo` and figure out what's the version that satisfy both even if its not a direct dependency for us. This is not maintainable as we need to do this everytime `foo` or `transformers` updated their dependency on `protobuf`. Then imagine what if there are multiple packages in our dependency that requires `protobuf`. If `transformer` pin `protobuf`, pip will auto-resolve the version",
"> if its not a direct dependency for us.\r\n\r\nIt's not a direct dependency for `transformers` neither. It's only for the usage of the tokenizers based on `sentencepiece`.\r\n\r\nIt's not practical for us to put all packages in the `setup.py` as hard dependency.\r\n\r\n> imagine what if there are multiple packages in our dependency that requires protobuf. \r\n\r\nLet's not overthink and just make your package `AutoGluon` work for now. Maintenance is not an easy task, and if there has something to be updated in the future, that's it.",
"@yinweisu \r\n\r\nFYI: after #24599, we no longer pin `protobuf` in `transformers`."
] | 1,687 | 1,688 | 1,688 |
NONE
| null |
### System Info
`transformers==4.30.2`
Protobuf not being installed along with transformers even when it's specified in `setup.py`.
This can be an issue if the environment already has the latest version of protobuf installed, which is not compatible with transformers.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
pip install transformers -U
pip freeze | grep protobuf
### Expected behavior
protobuf with correct pinned version should be installed
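For downstream packages hitting this, a minimal sketch of the workaround discussed in the comments is to pin protobuf explicitly next to transformers (the package name and version bound below are illustrative; `<4` reflects the sentencepiece incompatibility mentioned above):
```python
# setup.py of a hypothetical downstream package (illustrative sketch only)
from setuptools import setup

setup(
    name="my-downstream-package",
    install_requires=[
        "transformers",
        # transformers only lists protobuf as a soft dependency (via extras such as
        # `transformers[sentencepiece]`), so pin it here until protobuf 4 is supported.
        "protobuf<4",
    ],
)
```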
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24533/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24532
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24532/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24532/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24532/events
|
https://github.com/huggingface/transformers/pull/24532
| 1,777,586,658 |
PR_kwDOCUB6oc5UEciO
| 24,532 |
Allow backbones not in backbones_supported - Maskformer Mask2Former
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @amyeroberts, thanks for this feature.\r\nOnce Mask2FormerModel is instantiated using e.g. a FocalNetConfig or a SwinConfig as its backbone_config, is it possible to load pre-trained weights from one of the corresponding image classification models (e.g. FocalNetForImageClassification or SwinForImageClassification)?",
"Hi @matteot11 - not yet! But it's in the works: #28214"
] | 1,687 | 1,704 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
Updates configuration creation to allow backbone configs to be passed in which aren't listed in `backbones_supported` - downgrading the exception raised to a warning.
Tested with:
```python
from transformers import (
Mask2FormerConfig,
MaskFormerConfig,
Mask2FormerModel,
MaskFormerModel,
FocalNetConfig
)
# Test with a backbone not officially supported
backbone_config = FocalNetConfig(out_indices=(-2, -1))
maskformer_config = MaskFormerConfig(backbone_config=backbone_config)
model = MaskFormerModel(maskformer_config)
mask2former_config = Mask2FormerConfig(backbone_config=backbone_config)
model = Mask2FormerModel(mask2former_config)
```
Fixes #24244
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24532/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24532",
"html_url": "https://github.com/huggingface/transformers/pull/24532",
"diff_url": "https://github.com/huggingface/transformers/pull/24532.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24532.patch",
"merged_at": 1687894476000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24531
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24531/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24531/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24531/events
|
https://github.com/huggingface/transformers/issues/24531
| 1,777,516,560 |
I_kwDOCUB6oc5p8sAQ
| 24,531 |
Document QA Pipeline Suddenly Down?
|
{
"login": "FFFiend",
"id": 96851409,
"node_id": "U_kgDOBcXV0Q",
"avatar_url": "https://avatars.githubusercontent.com/u/96851409?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FFFiend",
"html_url": "https://github.com/FFFiend",
"followers_url": "https://api.github.com/users/FFFiend/followers",
"following_url": "https://api.github.com/users/FFFiend/following{/other_user}",
"gists_url": "https://api.github.com/users/FFFiend/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FFFiend/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FFFiend/subscriptions",
"organizations_url": "https://api.github.com/users/FFFiend/orgs",
"repos_url": "https://api.github.com/users/FFFiend/repos",
"events_url": "https://api.github.com/users/FFFiend/events{/privacy}",
"received_events_url": "https://api.github.com/users/FFFiend/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Please follow the issue template and paste the result of `transformers-cli env`. It looks like you don't have PyTorch installed in your environment but it's hard to be sure without that. The document QA pipeline requires torch, it doesn't have an implementation in TensorFlow.",
"Resolved!! Thank you ๐ "
] | 1,687 | 1,687 | 1,687 |
NONE
| null |
### System Info
Transformers 4.29.2, MacOS, Python 3.9.6 64-bit,
I just have this piece of code:
```
query_pipeline = pipeline(
"document-question-answering",
model="impira/layoutlm-document-qa"
)
```
and on this Python version it gives me:
```
RuntimeError: Failed to import transformers.models.layoutlm.modeling_tf_layoutlm because of the
following error (look up to see its traceback):
No module named 'keras.engine'
```
when it was working perfectly yesterday.
On Version: 4.30.2, MacOS, Python 3.10.12 64-bit, I get a different error somehow
```
File "/opt/homebrew/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 988, in pipeline
return pipeline_class(model=model, framework=framework, task=task, **kwargs)
File "/opt/homebrew/lib/python3.10/site-packages/transformers/pipelines/document_question_answering.py", line 145, in __init__
self.check_model_type(MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING)
NameError: name 'MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING' is not defined
```
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
Paste the code snippet and run the Python script.
### Expected behavior
I expect the pipeline to work properly as intended.
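A minimal sketch of the resolution implied by the comments above — the document-question-answering pipeline needs a PyTorch backend, so install torch and request the PyTorch framework explicitly:
```python
# Assumes `pip install torch` has been run; this pipeline has no TensorFlow implementation.
from transformers import pipeline

query_pipeline = pipeline(
    "document-question-answering",
    model="impira/layoutlm-document-qa",
    framework="pt",  # avoid falling back to a broken/partial TensorFlow install
)
```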
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24531/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24530
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24530/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24530/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24530/events
|
https://github.com/huggingface/transformers/pull/24530
| 1,777,513,182 |
PR_kwDOCUB6oc5UEMAZ
| 24,530 |
Fix Typo
|
{
"login": "tony9402",
"id": 30228292,
"node_id": "MDQ6VXNlcjMwMjI4Mjky",
"avatar_url": "https://avatars.githubusercontent.com/u/30228292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tony9402",
"html_url": "https://github.com/tony9402",
"followers_url": "https://api.github.com/users/tony9402/followers",
"following_url": "https://api.github.com/users/tony9402/following{/other_user}",
"gists_url": "https://api.github.com/users/tony9402/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tony9402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tony9402/subscriptions",
"organizations_url": "https://api.github.com/users/tony9402/orgs",
"repos_url": "https://api.github.com/users/tony9402/repos",
"events_url": "https://api.github.com/users/tony9402/events{/privacy}",
"received_events_url": "https://api.github.com/users/tony9402/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixed wrong annotation
- `(seq_len, batch, embed_dim)` -> `(batch, seq_len, embed_dim)`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24530/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24530/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24530",
"html_url": "https://github.com/huggingface/transformers/pull/24530",
"diff_url": "https://github.com/huggingface/transformers/pull/24530.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24530.patch",
"merged_at": 1687894694000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24529
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24529/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24529/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24529/events
|
https://github.com/huggingface/transformers/pull/24529
| 1,777,355,502 |
PR_kwDOCUB6oc5UDpYK
| 24,529 |
Fixed OwlViTModel inplace operations
|
{
"login": "pasqualedem",
"id": 45567509,
"node_id": "MDQ6VXNlcjQ1NTY3NTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/45567509?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pasqualedem",
"html_url": "https://github.com/pasqualedem",
"followers_url": "https://api.github.com/users/pasqualedem/followers",
"following_url": "https://api.github.com/users/pasqualedem/following{/other_user}",
"gists_url": "https://api.github.com/users/pasqualedem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pasqualedem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pasqualedem/subscriptions",
"organizations_url": "https://api.github.com/users/pasqualedem/orgs",
"repos_url": "https://api.github.com/users/pasqualedem/repos",
"events_url": "https://api.github.com/users/pasqualedem/events{/privacy}",
"received_events_url": "https://api.github.com/users/pasqualedem/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@pasqualedem Could you apply the commit suggestions, then we can merge ๐ Thanks.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24529). All of your documentation changes will be reflected on that endpoint."
] | 1,687 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR replaces the "/=" operator in OwlViTModel which causes an error in the backward pass with the non-inplace version.
Fixes #24525
#24525
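For context, a small sketch of the pattern being fixed (tensor names are illustrative assumptions, not the exact diff):
```python
import torch

# Stand-in for embeddings produced inside the model (illustrative only).
image_embeds = torch.randn(2, 512, requires_grad=True) @ torch.randn(512, 512)

# In-place `image_embeds /= image_embeds.norm(...)` mutates a tensor that autograd may
# still need, which is what broke the backward pass here. The out-of-place form avoids that:
image_embeds = image_embeds / image_embeds.norm(p=2, dim=-1, keepdim=True)

image_embeds.sum().backward()  # backward runs fine with the out-of-place division
```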
## Who can review?
@ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24529/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24529/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24529",
"html_url": "https://github.com/huggingface/transformers/pull/24529",
"diff_url": "https://github.com/huggingface/transformers/pull/24529.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24529.patch",
"merged_at": 1688026647000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24528
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24528/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24528/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24528/events
|
https://github.com/huggingface/transformers/pull/24528
| 1,777,288,687 |
PR_kwDOCUB6oc5UDau2
| 24,528 |
Add `finetuned_from` property in the autogenerated model card
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"What did you have in mind exactly? The only text added is \r\n```\r\n\"This model is a fine-tuned version of f\" [{self.finetuned_from}](https://huggingface.co/{self.finetuned_from}) on \"\r\n```\r\nwhich doesn't contain the `finetuned_from` tag.",
"@sgugger i would rename the variable everywhere",
"That's a breaking change in Transformers for cosmetic reasons only, so a big no no from my side.",
"I would go ahead with this PR as-is :fire: "
] | 1,687 | 1,688 | 1,688 |
COLLABORATOR
| null |
# What does this PR do?
We already extract the info but it wasn't properly put in the metadata. This PR adds it.
cc @julien-c @osanseviero for the exact name of the tag to use.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24528/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24528/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24528",
"html_url": "https://github.com/huggingface/transformers/pull/24528",
"diff_url": "https://github.com/huggingface/transformers/pull/24528.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24528.patch",
"merged_at": 1688507912000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24527
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24527/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24527/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24527/events
|
https://github.com/huggingface/transformers/pull/24527
| 1,777,240,276 |
PR_kwDOCUB6oc5UDQFM
| 24,527 |
Update `huggingface_hub` commit sha
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24527). All of your documentation changes will be reflected on that endpoint."
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
To use `timeout=10.0` in [this commit](https://github.com/huggingface/huggingface_hub/commit/e4a419bf6bbaa95d14704cc781d3e81a49cef413).
The goal is to make sure the fix from the infra team really works. Once verified, we can keep it as `10.0`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24527/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24527",
"html_url": "https://github.com/huggingface/transformers/pull/24527",
"diff_url": "https://github.com/huggingface/transformers/pull/24527.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24527.patch",
"merged_at": 1687880516000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24526
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24526/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24526/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24526/events
|
https://github.com/huggingface/transformers/pull/24526
| 1,777,235,668 |
PR_kwDOCUB6oc5UDPCb
| 24,526 |
Find module name in an OS-agnostic fashion
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"https://github.com/huggingface/transformers/blob/8a968e572c335d82c87dc3708fe0a9da79664d64/src/transformers/dynamic_module_utils.py#L274\r\nThis line should be change also~",
"Good point, it's added."
] | 1,687 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
Splitting on `os.path.sep` to get the filename is not fully compatible with Windows: the OS recognizes a local path written as `folder/file.ext`, but `os.path.sep` is `\\`. Hopefully this way is better.
Fixes #24517
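A minimal sketch of the idea, not the exact patch in this PR (the helper name is made up): derive the module name with `os.path.basename`/`os.path.splitext`, which on Windows understands both separators, instead of splitting on `os.path.sep` manually.
```python
import os

def get_module_name(module_file: str) -> str:
    # Hypothetical helper: os.path.basename also knows about os.path.altsep on Windows,
    # so 'folder/file.py' and 'folder\\file.py' both resolve to 'file' there.
    return os.path.splitext(os.path.basename(module_file))[0]

print(get_module_name("folder/file.py"))   # 'file' on every OS
print(get_module_name("folder\\file.py"))  # 'file' on Windows (backslash is not a separator on POSIX)
```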
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24526/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24526/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24526",
"html_url": "https://github.com/huggingface/transformers/pull/24526",
"diff_url": "https://github.com/huggingface/transformers/pull/24526.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24526.patch",
"merged_at": 1687886479000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24525
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24525/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24525/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24525/events
|
https://github.com/huggingface/transformers/issues/24525
| 1,777,166,985 |
I_kwDOCUB6oc5p7WqJ
| 24,525 |
Backward pass error during OWL-ViT finetuning
|
{
"login": "pasqualedem",
"id": 45567509,
"node_id": "MDQ6VXNlcjQ1NTY3NTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/45567509?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pasqualedem",
"html_url": "https://github.com/pasqualedem",
"followers_url": "https://api.github.com/users/pasqualedem/followers",
"following_url": "https://api.github.com/users/pasqualedem/following{/other_user}",
"gists_url": "https://api.github.com/users/pasqualedem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pasqualedem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pasqualedem/subscriptions",
"organizations_url": "https://api.github.com/users/pasqualedem/orgs",
"repos_url": "https://api.github.com/users/pasqualedem/repos",
"events_url": "https://api.github.com/users/pasqualedem/events{/privacy}",
"received_events_url": "https://api.github.com/users/pasqualedem/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@pasqualedem \r\n\r\nWould you like to open a PR to fix it ? ๐ค ",
"Could you share your scripts for owlvit finetuning? Thanks @pasqualedem "
] | 1,687 | 1,702 | 1,688 |
CONTRIBUTOR
| null |
### System Info
Google Colab
### Who can help?
@ArthurZucker
@amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1GfspAHLGTMmzfNShAVebzM04s32q1LKb?usp=sharing
### Expected behavior
I was trying to fine-tune OWL-ViT and came across this backward pass error. I investigated and found that the error is caused by the "/=" operator, which performs an in-place operation.
At lines:
- https://github.com/huggingface/transformers/blob/4e8929dcbb9040f54f52d898a600260f338ef54f/src/transformers/models/owlvit/modeling_owlvit.py#L1296
- https://github.com/huggingface/transformers/blob/4e8929dcbb9040f54f52d898a600260f338ef54f/src/transformers/models/owlvit/modeling_owlvit.py#L1297
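For context, a generic illustration of the failure mode (it uses `torch.exp`, which saves its output for the backward pass; it is not the actual OwlViT code):
```python
import torch

x = torch.randn(4, 8, requires_grad=True)

# In-place "/=" on a tensor that autograd saved for backward corrupts it.
y = x.exp()
y /= y.norm(dim=-1, keepdim=True)
try:
    y.sum().backward()
except RuntimeError as err:
    print("backward failed:", err)  # "... modified by an inplace operation"

# Out-of-place division allocates a new tensor, so backward works.
z = x.exp()
z = z / z.norm(dim=-1, keepdim=True)
z.sum().backward()
print(x.grad.shape)  # torch.Size([4, 8])
```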
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24525/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24524
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24524/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24524/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24524/events
|
https://github.com/huggingface/transformers/issues/24524
| 1,777,151,541 |
I_kwDOCUB6oc5p7S41
| 24,524 |
model.generate single CPU core bottleneck
|
{
"login": "dhcracchiolo",
"id": 47190143,
"node_id": "MDQ6VXNlcjQ3MTkwMTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/47190143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhcracchiolo",
"html_url": "https://github.com/dhcracchiolo",
"followers_url": "https://api.github.com/users/dhcracchiolo/followers",
"following_url": "https://api.github.com/users/dhcracchiolo/following{/other_user}",
"gists_url": "https://api.github.com/users/dhcracchiolo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhcracchiolo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhcracchiolo/subscriptions",
"organizations_url": "https://api.github.com/users/dhcracchiolo/orgs",
"repos_url": "https://api.github.com/users/dhcracchiolo/repos",
"events_url": "https://api.github.com/users/dhcracchiolo/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhcracchiolo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @gante ",
"Hey @dhcracchiolo ๐ \r\n\r\nGood timing on your question, we are precisely exploring what we can do to speed up LLaMA within python + PyTorch + `.generate` API, in [this repo](https://github.com/fxmarty/accelerated-pytorch-transformers-generation). \r\n\r\nIn a nutshell, the model is CPU bound, as is most PyTorch code for models of this size. If you install the repo above, run `python run_llama.py --model huggingface/llama-7b --preallocate --profile`, and check the profiler results on TensorBoard, you'll see that the CPU bottleneck is in orchestrating the PyTorch execution on GPU. This is why you see high-speed inference packages to go straight into CUDA, to get rid of this overhead.\r\n\r\nFeel free to use the code from the repo above, it should get you ~50% speedup :) It doesn't have all the options that we have in `.generate`, though. If you have concrete suggestions on how to further speed it up, we're all ears ๐ ",
"> Hey @dhcracchiolo ๐\r\n> \r\n> Good timing on your question, we are precisely exploring what we can do to speed up LLaMA within python + PyTorch + `.generate` API, in [this repo](https://github.com/fxmarty/accelerated-pytorch-transformers-generation).\r\n> \r\n> In a nutshell, the model is CPU bound, as is most PyTorch code for models of this size. If you install the repo above, run `python run_llama.py --model huggingface/llama-7b --preallocate --profile`, and check the profiler results on TensorBoard, you'll see that the CPU bottleneck is in orchestrating the PyTorch execution on GPU. This is why you see high-speed inference packages to go straight into CUDA, to get rid of this overhead.\r\n> \r\n> Feel free to use the code from the repo above, it should get you ~50% speedup :) It doesn't have all the options that we have in `.generate`, though. If you have concrete suggestions on how to further speed it up, we're all ears ๐\r\n\r\nThanks for the response! Excellent! I'll check the repo out and report back. ๐ ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@gante Can i check if using other interfaces like `pipeline` circumvent this issue?",
"@calvintwr ofc you can, no guarantees it would be merged though ;) Depends on the benefits-costs tradeoff, as any contribution does.\r\n\r\n`pipeline` calls `generate` under the hood, so the bottleneck would be the same.",
"@gante I donโt mind exploring the solution. Can I ask if the code in question here?: https://github.com/huggingface/transformers/blob/1791ef8df647a38b4fcb96c14ddd83a43861d713/src/transformers/generation/utils.py#L1295 \r\n\r\nIf so, I skimmed, and is the reason because of extensive use of torch methods at many different places? Or is there a main culprit?",
"The main culprit is the CPU bottleneck when issuing instructions (i.e. the time in Python issuing GPU instructions is not optimized in many segments of the model forward pass). Check [this comment](https://github.com/huggingface/transformers/issues/24524#issuecomment-1611081237), it has instructions on how to dive deeper :) Note that most of the bottleneck is on the forward pass of the model itself.\r\n\r\nThis means that speedups for existing models can come mostly from two angles:\r\n1 - Spending less time on CPU preparing GPU instructions\r\n2 - Picking faster GPU operations that yield the same results\r\n\r\nWe rarely optimize our models, so there is room for some speed-up in our models (up to 10%, I'd say).",
"@gante I see that. 10% is not a lotโฆ\r\n\r\nWhatโs your view on offloading this portion to Cpp, and use SharedArray to transfer data between?\r\n\r\nWe can help, but just trying to get a sense of the extent of the task. But if this is achievable, this will bring transformer to the next level.",
"@calvintwr offloading to other programming languages is not within the scope of `transformers`, at least not for now :) `SharedArray` could be an option!\r\n\r\nIn `transformers`, we love performance upgrades! However, our biggest focus is on ease of use, so we don't want to introduce performance gains at the cost of complex dependencies or complex code ๐ค "
] | 1,687 | 1,692 | 1,691 |
NONE
| null |
### System Info
I'm running inference on a GPU EC2 instance using CUDA. After doing a little profiling I noticed the model.generate method was the clear bottleneck. Upon closer inspection, running htop showed that during this method call only a single CPU core is used and is maxed out at 100%. I've made sure all of my weights, biases and activations are on the GPU. Running nvidia-smi shows the expected amount of VRAM usage. My question is: is there a way to speed this method up using multiple CPU cores? I haven't dug deep into the method to see exactly what is causing the issue yet. Just curious if there's something obvious I'm missing.
(Basic sample code)
```python
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig
import torch

tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    device_map="auto",
)

PROMPT = f"""### Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
### Input: "What is the difference between a llama and a vicuna?"
### Response:"""

inputs = tokenizer(
    PROMPT,
    return_tensors="pt",
)
input_ids = inputs["input_ids"].cuda()

generation_config = GenerationConfig(
    temperature=0.6,
    top_p=0.95,
    repetition_penalty=1.15,
)
generation_output = model.generate(
    input_ids=input_ids,
    generation_config=generation_config,
    return_dict_in_generate=True,
    output_scores=True,
    max_new_tokens=128,
)
for s in generation_output.sequences:
    print(tokenizer.decode(s))
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run this method with any model and profile the CPU cores to see it's only using a single core:
```python
generation_output = model.generate(
    input_ids=input_ids,
    generation_config=generation_config,
    return_dict_in_generate=True,
    output_scores=True,
    max_new_tokens=128,
)
```
### Expected behavior
Multiprocessing to help balance the load.
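One way to see the single-core, CPU-bound behaviour described above is to profile the call. A rough sketch, assuming `model` and `input_ids` are set up as in the snippet above (timings will vary):
```python
import torch
from torch.profiler import profile, ProfilerActivity

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    with torch.no_grad():
        model.generate(input_ids=input_ids, max_new_tokens=32)

# Most of the CPU time is the Python/dispatcher overhead of issuing GPU kernels,
# which runs on a single core regardless of how many CPUs the machine has.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```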
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24524/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24524/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24523
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24523/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24523/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24523/events
|
https://github.com/huggingface/transformers/pull/24523
| 1,777,134,541 |
PR_kwDOCUB6oc5UC5UV
| 24,523 |
Falcon port
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Update: Slightly delayed because there are some breaking architecture changes between the different Falcon checkpoints - I'm merging the various layers and using config variables to switch between the behaviours.",
"_The documentation is not available anymore as the PR was closed or merged._",
"feel free to ping me for a review anytime! ",
"Hi @Rocketknight1 \r\nWould this PR allow to export falcon to onnx?\r\nAs today using the latest release (4.30.1):\r\n```\r\nTraceback (most recent call last):\r\n File \"hf2onnx.py\", line 99, in <module>\r\n model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(model, feature= feature)\r\n File \"site-packages/transformers/onnx/features.py\", line 728, in check_supported_model_or_raise\r\n model_features = FeaturesManager.get_supported_features_for_model_type(model_type, model_name=model_name)\r\n File \"site-packages/transformers/onnx/features.py\", line 575, in get_supported_features_for_model_type\r\n raise KeyError(\r\nKeyError: \"refinedwebmodel is not supported yet. Only ['albert', 'bart', 'beit', 'bert', 'big-bird', 'bigbird-pegasus', 'blenderbot', 'blenderbot-small', 'bloom', 'camembert', 'clip', 'codegen', 'convbert', 'convnext', 'data2vec-text', 'data2vec-vision', 'deberta', 'deberta-v2', 'deit', 'detr', 'distilbert', 'electra', 'flaubert', 'gpt2', 'gptj', 'gpt-neo', 'groupvit', 'ibert', 'imagegpt', 'layoutlm', 'layoutlmv3', 'levit', 'longt5', 'longformer', 'marian', 'mbart', 'mobilebert', 'mobilenet-v1', 'mobilenet-v2', 'mobilevit', 'mt5', 'm2m-100', 'owlvit', 'perceiver', 'poolformer', 'rembert', 'resnet', 'roberta', 'roformer', 'segformer', 'squeezebert', 'swin', 't5', 'vision-encoder-decoder', 'vit', 'whisper', 'xlm', 'xlm-roberta', 'yolos'] are supported. If you want to support refinedwebmodel please propose a PR or open up an issue.\"\r\n```\r\nbest",
"Hey all! The main modeling code should be ready for final review now. Thanks @ArthurZucker for the comprehensive review - it was really helpful! There's one bug left that's causing a failing test, but I think it's a one-line fix that I can track down tomorrow. This may also be the issue that's causing assisted generation to fail, but those tests are currently skipped.\r\n\r\nI also need to figure out porting the tokenizer, and then once this is merged I'll need to prepare the repos to transition over to the library code.\r\n\r\ncc @amyeroberts for core maintainer review!"
] | 1,687 | 1,689 | 1,689 |
MEMBER
| null |
This PR adds the Falcon model to the main library. It's still a work in progress, and integration tests / model checkpoints still need to be added!
TODO:
- [x] Migrate custom code checkpoints to the new architecture
- [x] Confirm tokenizer can be loaded correctly with `AutoTokenizer` for all checkpoints
- [x] Upload a ported 1B model.
- [x] Add integration tests for the 1B model
- [x] Add support for `output_attention`
- [x] Add support for `output_hidden_states`
- [x] Ensure all tests pass
- [x] Ensure any other issues addressed (see comments on Slack)
- [x] Address review comments
- [x] Ensure tokenizers are ported correctly (token_type_ids issue)
- [x] Upload library ports of all Falcon checkpoints and migrate/redirect to them
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24523/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 4,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24523/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24523",
"html_url": "https://github.com/huggingface/transformers/pull/24523",
"diff_url": "https://github.com/huggingface/transformers/pull/24523.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24523.patch",
"merged_at": 1689078992000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24522
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24522/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24522/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24522/events
|
https://github.com/huggingface/transformers/issues/24522
| 1,776,918,915 |
I_kwDOCUB6oc5p6aGD
| 24,522 |
LLamaTokenizer padding_side='left'
|
{
"login": "davidmrau",
"id": 20661461,
"node_id": "MDQ6VXNlcjIwNjYxNDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/20661461?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidmrau",
"html_url": "https://github.com/davidmrau",
"followers_url": "https://api.github.com/users/davidmrau/followers",
"following_url": "https://api.github.com/users/davidmrau/following{/other_user}",
"gists_url": "https://api.github.com/users/davidmrau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidmrau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidmrau/subscriptions",
"organizations_url": "https://api.github.com/users/davidmrau/orgs",
"repos_url": "https://api.github.com/users/davidmrau/repos",
"events_url": "https://api.github.com/users/davidmrau/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidmrau/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Please ask questions like this on the [forums](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests only.\r\n\r\nAll decoder models use padding on the left side since they can't properly generate the next token after a sentence if it ends with pad tokens. Llama is a decoder model, so follows the same rule."
] | 1,687 | 1,687 | 1,687 |
NONE
| null |
Trying to fine-tune the LLaMA model, I noticed that the LLaMA tokenizer adds padding tokens on the left side by default. I expected the padding to be added on the right side. Is this a bug, or is it supposed to be like this?
https://github.com/huggingface/transformers/blob/6fe8d198e3e30b709a7d233240f273804b886dcc/src/transformers/models/llama/tokenization_llama_fast.py#L85C5-L85C5
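For anyone hitting this, a small sketch of the difference in practice (the checkpoint name is just an example; any LLaMA-style tokenizer behaves the same way):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
tokenizer.pad_token = tokenizer.eos_token  # LLaMA ships without a pad token

# Left padding (the default here) keeps every sequence's last real token at the end,
# which is what decoder-only generation needs.
tokenizer.padding_side = "left"
left = tokenizer(["short prompt", "a somewhat longer prompt"], padding=True, return_tensors="pt")
print(left["input_ids"][0][:3])   # pad ids sit at the front of the shorter sequence

# For fine-tuning with a causal LM loss, right padding is the usual choice.
tokenizer.padding_side = "right"
right = tokenizer(["short prompt", "a somewhat longer prompt"], padding=True, return_tensors="pt")
print(right["input_ids"][0][-3:])  # pad ids sit at the end instead
```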
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24522/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24521
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24521/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24521/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24521/events
|
https://github.com/huggingface/transformers/pull/24521
| 1,776,915,730 |
PR_kwDOCUB6oc5UCI6I
| 24,521 |
Fix LR scheduler based on bs from auto bs finder
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @mzamini92",
"> cc @mzamini92\r\n\r\nyup that works. ๐ ",
"@muellerzr I think there might be a problem with this PR, after building from source, the following snippet gives me a learning rate of 0 before the end of training:\r\n\r\n```python\r\nimport evaluate\r\nimport numpy as np\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\r\nfrom transformers import TrainingArguments, Trainer\r\n\r\ndataset = load_dataset(\"yelp_review_full\", split=\"train[99%:]\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\r\n\r\n\r\ndef tokenize_function(examples):\r\n return tokenizer(examples[\"text\"], padding=\"max_length\", truncation=True)\r\n\r\n\r\ndef compute_metrics(eval_pred):\r\n logits, labels = eval_pred\r\n predictions = np.argmax(logits, axis=-1)\r\n return metric.compute(predictions=predictions, references=labels)\r\n\r\n\r\ntokenized_datasets = dataset.map(tokenize_function, batched=True).shuffle(seed=42)\r\ntokenized_datasets = tokenized_datasets.train_test_split(test_size=0.1)\r\n\r\nsmall_train_dataset = tokenized_datasets[\"train\"]\r\nsmall_eval_dataset = tokenized_datasets[\"test\"]\r\n\r\nmetric = evaluate.load(\"accuracy\")\r\n\r\ntraining_args = TrainingArguments(output_dir=\"test_trainer\", evaluation_strategy=\"epoch\", report_to=[], num_train_epochs=3, auto_find_batch_size=True,\r\n lr_scheduler_type=\"linear\", per_device_train_batch_size=1024, logging_strategy=\"steps\", logging_steps=5, fp16=True)\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\"bert-base-cased\", num_labels=5)\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=small_train_dataset,\r\n eval_dataset=small_eval_dataset,\r\n compute_metrics=compute_metrics,\r\n)\r\n\r\ntrainer.train()\r\n```",
"@thomas-schillaci what version of Accelerate are you using? ",
"@muellerzr `0.20.3` using `accelerate launch` and the following config:\r\n\r\n```\r\ncompute_environment: LOCAL_MACHINE\r\ndistributed_type: MULTI_GPU\r\ndowncast_bf16: 'no'\r\ngpu_ids: all\r\nmachine_rank: 0\r\nmain_training_function: main\r\nmixed_precision: fp16\r\nnum_machines: 1\r\nnum_processes: 3\r\nrdzv_backend: static\r\nsame_network: true\r\ntpu_env: []\r\ntpu_use_cluster: false\r\ntpu_use_sudo: false\r\nuse_cpu: false\r\n```",
"Thank you so much @thomas-schillaci! https://github.com/huggingface/transformers/pull/24758 should fix it right up, I verified it was being decayed properly and in-turn, a new lr scheduler actually being made. You can toy with `pip install git+https://github.com/huggingface/transformers@fix-train-bs` to try :) "
] | 1,687 | 1,689 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR presents an alternative to https://github.com/huggingface/transformers/pull/23038. Since we can't realistically modify the schedulers in Accelerate, this sets the correct "total batch size" based on the new batch size being used, which trickles down to the creation of the scheduler via max steps if applicable. We can't go further than this, however, as modifying the scheduler further would amount to auto gradient accumulation, which, while good, should be done after this :)
(and might be as simple as just modifying `self.accelerator.gradient_accumulation_steps`)
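To make the "trickles down" part concrete, a rough sketch of the arithmetic involved (plain Python, not Trainer internals): the scheduler's horizon in optimizer steps depends on the effective batch size, so it has to be recomputed when the auto finder shrinks the batch.
```python
import math

def total_update_steps(num_samples, per_device_bs, n_devices, grad_accum, epochs):
    effective_bs = per_device_bs * n_devices * grad_accum  # samples per optimizer step
    return math.ceil(num_samples / effective_bs) * epochs

# If the auto batch-size finder drops the per-device batch size from 1024 to 256,
# a linear schedule built for the old horizon would hit lr=0 long before training ends.
print(total_update_steps(50_000, 1024, 1, 1, 3))  # 147
print(total_update_steps(50_000, 256, 1, 1, 3))   # 588
```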
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24521/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24521",
"html_url": "https://github.com/huggingface/transformers/pull/24521",
"diff_url": "https://github.com/huggingface/transformers/pull/24521.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24521.patch",
"merged_at": 1687886906000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24520
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24520/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24520/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24520/events
|
https://github.com/huggingface/transformers/pull/24520
| 1,776,892,363 |
PR_kwDOCUB6oc5UCDmL
| 24,520 |
set model to training mode before accelerate.prepare
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,687 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
- trainer: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24520/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24520",
"html_url": "https://github.com/huggingface/transformers/pull/24520",
"diff_url": "https://github.com/huggingface/transformers/pull/24520.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24520.patch",
"merged_at": 1687874978000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24519
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24519/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24519/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24519/events
|
https://github.com/huggingface/transformers/issues/24519
| 1,776,796,747 |
I_kwDOCUB6oc5p58RL
| 24,519 |
I have a question about the source file modeling_llama.py
|
{
"login": "park1200656",
"id": 20066633,
"node_id": "MDQ6VXNlcjIwMDY2NjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/20066633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/park1200656",
"html_url": "https://github.com/park1200656",
"followers_url": "https://api.github.com/users/park1200656/followers",
"following_url": "https://api.github.com/users/park1200656/following{/other_user}",
"gists_url": "https://api.github.com/users/park1200656/gists{/gist_id}",
"starred_url": "https://api.github.com/users/park1200656/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/park1200656/subscriptions",
"organizations_url": "https://api.github.com/users/park1200656/orgs",
"repos_url": "https://api.github.com/users/park1200656/repos",
"events_url": "https://api.github.com/users/park1200656/events{/privacy}",
"received_events_url": "https://api.github.com/users/park1200656/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey @park1200656 ๐ \r\n\r\nSome operations degrade the quality of the outputs if not performed at a certain minimum precision. The softmax in the attention layer and the variance accumulation in RMSNorm performed in FP32 are two examples of that :) Related read: [this issue](https://github.com/huggingface/transformers/pull/17437)\r\n\r\n_________________________________________\r\n\r\nFollowing our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) ๐ค If this is your first issue with us, check [this guide](https://huggingface.co/course/chapter8/5?fw=pt).",
"I had the same question [yesterday](https://discuss.huggingface.co/t/why-models-llama-in-particular-upcasts-softmax-to-fp32/44787). Can we make it optional? At least softmax\r\n\r\nBF16 is good enough. And by \"good enough\" I mean it \"not crashes at long context at my laptop's 3080TI \" and \"return values are the same anyway, instability might be overstated\"\r\n\r\nExample. Making it optional:\r\n\r\n```diff\r\ndiff --git a/src/transformers/models/llama/modeling_llama.py b/src/transformers/models/llama/modeling_llama.py\r\nindex 24231c3f7..230e5333c 100755\r\n--- a/src/transformers/models/llama/modeling_llama.py\r\n+++ b/src/transformers/models/llama/modeling_llama.py\r\n@@ -228,8 +228,12 @@ class LlamaAttention(nn.Module):\r\n attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min, device=attn_weights.device)\r\n )\r\n \r\n- # upcast attention to fp32\r\n- attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)\r\n+ # optionally upcast attention to fp32\r\n+ if self.config.use_attn_upcast:\r\n+ attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)\r\n+ else:\r\n+ attn_weights = nn.functional.softmax(attn_weights, dim=-1).to(query_states.dtype)\r\n```\r\n\r\nTest script:\r\n```python\r\nfrom transformers import AutoModelForCausalLM\r\nimport torch\r\nimport sys\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"./models/open_llama_3b/\", torch_dtype=torch.bfloat16).cuda()\r\nmodel.config.use_attn_upcast = \"--no-oom\" not in sys.argv\r\nprint(\"Predict that OOM will happen: \", model.config.use_attn_upcast)\r\n\r\ninput_ids = torch.arange(20)[None].cuda()\r\nprint(model(input_ids).logits.mean(-1))\r\n\r\ninput_ids = torch.arange(1000)[None].cuda()\r\nprint(model(input_ids).logits.mean())\r\n```\r\n\r\nWith upcast removed\r\n```console\r\n$ python demo_py.py --no-oom\r\n\r\nPredict that OOM will happen: False\r\ntensor([[-9.0000, -6.0938, -1.8281, -7.7812, -7.5000, -7.5000, -7.6250, -7.7500,\r\n -7.1250, -7.0000, -7.7188, -7.5625, -6.9688, -5.5312, -6.1562, -6.5312,\r\n -7.5938, -7.0000, -7.1875, -6.8750]], device='cuda:0',\r\n dtype=torch.bfloat16, grad_fn=<MeanBackward1>)\r\ntensor(-6.9062, device='cuda:0', dtype=torch.bfloat16, grad_fn=<MeanBackward0>)\r\n```\r\n\r\nWith upcast:\r\n```console\r\n$ python demo_py.py \r\n\r\nPredict that OOM will happen: True\r\ntensor([[-9.0000, -6.0938, -1.8281, -7.7812, -7.5000, -7.5000, -7.6250, -7.7500,\r\n -7.1250, -7.0000, -7.7188, -7.5625, -6.9688, -5.5312, -6.1562, -6.5312,\r\n -7.5938, -7.0000, -7.1875, -6.8750]], device='cuda:0',\r\n dtype=torch.bfloat16, grad_fn=<MeanBackward1>)\r\nTraceback (most recent call last):\r\n File \"/home/fella/src/llama/text-generation-webui/demo_py.py\", line 14, in <module>\r\n print(model(input_ids).logits.mean())\r\n ^^^^^^^^^^^^^^^^\r\n File \"/home/fella/src/sd/sd/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/fella/src/sd/sd/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py\", line 690, in forward\r\n outputs = self.model(\r\n ^^^^^^^^^^^\r\n File \"/home/fella/src/sd/sd/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/fella/src/sd/sd/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py\", line 580, in 
forward\r\n layer_outputs = decoder_layer(\r\n ^^^^^^^^^^^^^^\r\n File \"/home/fella/src/sd/sd/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/fella/src/sd/sd/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py\", line 295, in forward\r\n hidden_states, self_attn_weights, present_key_value = self.self_attn(\r\n ^^^^^^^^^^^^^^^\r\n File \"/home/fella/src/sd/sd/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/fella/src/sd/sd/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py\", line 232, in forward\r\n attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/fella/src/sd/sd/lib/python3.11/site-packages/torch/nn/functional.py\", line 1845, in softmax\r\n ret = input.softmax(dim, dtype=dtype)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\ntorch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 124.00 MiB (GPU 0; 15.74 GiB total capacity; 14.83 GiB already allocated; 134.38 MiB free; 15.35 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\n```",
"@Maykeye we have other options to reduce the memory footprint at inference time -- have you tried playing with our [support for 4-bit inference](https://huggingface.co/docs/transformers/v4.30.0/en/perf_infer_gpu_one#bitsandbytes-integration-for-fp4-mixedprecision-inference)? On a 3080 TI you may be able to run the 7B LLaMA model this way :)",
"Yes and quantized models produce noticeably different results. ",
"In general, lowering the precision of these operations will have a more significant impact on downstream performance ([take it from the person that initially added the upcast at Meta](https://github.com/huggingface/transformers/pull/17437#issuecomment-1139723683)).\r\n\r\nSince we have other memory reduction strategies, we will not add the flag you're proposing. (Still, the code is open-source, feel free to fork `transformers` and keep your changes ๐ค )",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> In general, lowering the precision of these operations will have a more significant impact on downstream performance ([take it from the person that initially added the upcast at Meta](https://github.com/huggingface/transformers/pull/17437#issuecomment-1139723683)).\r\n> \r\n> Since we have other memory reduction strategies, we will not add the flag you're proposing. (Still, the code is open-source, feel free to fork `transformers` and keep your changes ๐ค )\r\n\r\nI dont think this is true. I have experimented a lot with mistral and fuyu and removing/changing the fused softmax cast in both has very little to any impact compared to alternative memory saving approaches (in terms of acc/loss tracked).\r\n\r\nSeems like something that should be allowed but warned about for models."
] | 1,687 | 1,691 | 1,691 |
NONE
| null |
### System Info
@ArthurZucker @gante
path : "src/transformers/models/llama/modeling_llama.py"
Lines 85 and 232 of this file contain `float32` as a hard-coded constant.
This looks like it might be a bug to me. Or is there another reason?
Thanks.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
class LlamaRMSNorm(nn.Module):
    def __init__(self, hidden_size, eps=1e-6):
        """
        LlamaRMSNorm is equivalent to T5LayerNorm
        """
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = eps

    def forward(self, hidden_states):
        input_dtype = hidden_states.dtype
        variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
        return (self.weight * hidden_states).to(input_dtype)
```
======
```python
class LlamaAttention(nn.Module):
    """Multi-headed attention from 'Attention Is All You Need' paper"""

    def __init__(self, config: LlamaConfig):
        super().__init__()
        self.config = config
        self.hidden_size = config.hidden_size
        self.num_heads = config.num_attention_heads
        self.head_dim = self.hidden_size // self.num_heads
        self.max_position_embeddings = config.max_position_embeddings

        if (self.head_dim * self.num_heads) != self.hidden_size:
            raise ValueError(
                f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
                f" and `num_heads`: {self.num_heads})."
            )
        self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False)
        self.k_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False)
        self.v_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False)
        self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=False)
        self.rotary_emb = LlamaRotaryEmbedding(self.head_dim, max_position_embeddings=self.max_position_embeddings)

    def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):
        return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous()

    def forward(
        self,
        hidden_states: torch.Tensor,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_value: Optional[Tuple[torch.Tensor]] = None,
        output_attentions: bool = False,
        use_cache: bool = False,
    ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
        bsz, q_len, _ = hidden_states.size()

        query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
        key_states = self.k_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
        value_states = self.v_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)

        kv_seq_len = key_states.shape[-2]
        if past_key_value is not None:
            kv_seq_len += past_key_value[0].shape[-2]
        cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
        query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
        # [bsz, nh, t, hd]

        if past_key_value is not None:
            # reuse k, v, self_attention
            key_states = torch.cat([past_key_value[0], key_states], dim=2)
            value_states = torch.cat([past_key_value[1], value_states], dim=2)

        past_key_value = (key_states, value_states) if use_cache else None

        attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)

        if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
            raise ValueError(
                f"Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is"
                f" {attn_weights.size()}"
            )

        if attention_mask is not None:
            if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
                raise ValueError(
                    f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
                )
            attn_weights = attn_weights + attention_mask
            attn_weights = torch.max(
                attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min, device=attn_weights.device)
            )

        # upcast attention to fp32
        attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
        attn_output = torch.matmul(attn_weights, value_states)

        if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
            raise ValueError(
                f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
                f" {attn_output.size()}"
            )

        attn_output = attn_output.transpose(1, 2)
        attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)

        attn_output = self.o_proj(attn_output)

        if not output_attentions:
            attn_weights = None

        return attn_output, attn_weights, past_key_value
```
### Expected behavior
Maybe `float32` should be replaced with the input `dtype`?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24519/timeline
|
completed
| null | null |