Dataset schema (column name, dtype, and value statistics as reported by the dataset viewer):

| column | dtype | statistics |
|---|---|---|
| url | string | lengths 62–66 |
| repository_url | string | 1 class |
| labels_url | string | lengths 76–80 |
| comments_url | string | lengths 71–75 |
| events_url | string | lengths 69–73 |
| html_url | string | lengths 50–56 |
| id | int64 | 377M – 2.15B |
| node_id | string | lengths 18–32 |
| number | int64 | 1 – 29.2k |
| title | string | lengths 1–487 |
| user | dict | |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k – 1.71k |
| updated_at | int64 | 1.54k – 1.71k |
| closed_at | int64 | 1.54k – 1.71k (nullable ⌀) |
| author_association | string | 4 classes |
| active_lock_reason | string | 2 classes |
| body | string | lengths 0 – 234k (nullable ⌀) |
| reactions | dict | |
| timeline_url | string | lengths 71–75 |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| pull_request | dict | |
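For orientation, a minimal sketch of loading a dump with this schema via the `datasets` library (the file name `issues.jsonl` is an assumption; the dump does not name its source file):

```python
# Minimal sketch, assuming the records below live in a local JSON Lines file.
from datasets import load_dataset

ds = load_dataset("json", data_files="issues.jsonl", split="train")
print(ds.features)      # column names and dtypes, matching the schema above
print(ds[0]["title"])   # first record's title, e.g. "Bump flask from 2.0.3 to 2.3.2 ..."
```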
https://api.github.com/repos/huggingface/transformers/issues/23094
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23094/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23094/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23094/events
|
https://github.com/huggingface/transformers/pull/23094
| 1,691,553,781 |
PR_kwDOCUB6oc5PipK9
| 23,094 |
Bump flask from 2.0.3 to 2.3.2 in /examples/research_projects/decision_transformer
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
Bumps [flask](https://github.com/pallets/flask) from 2.0.3 to 2.3.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pallets/flask/releases">flask's releases</a>.</em></p>
<blockquote>
<h2>2.3.2</h2>
<p>This is a security fix release for the 2.3.x release branch.</p>
<ul>
<li>Security advisory: <a href="https://github.com/pallets/flask/security/advisories/GHSA-m2qf-hxjv-5gpq">https://github.com/pallets/flask/security/advisories/GHSA-m2qf-hxjv-5gpq</a>, CVE-2023-30861</li>
<li>Changes: <a href="https://flask.palletsprojects.com/en/2.3.x/changes/#version-2-3-2">https://flask.palletsprojects.com/en/2.3.x/changes/#version-2-3-2</a></li>
<li>Milestone: <a href="https://github.com/pallets/flask/milestone/29?closed=1">https://github.com/pallets/flask/milestone/29?closed=1</a></li>
</ul>
<h2>2.3.1</h2>
<p>This is a fix release for the 2.3.x release branch.</p>
<ul>
<li>Changes: <a href="https://flask.palletsprojects.com/en/2.3.x/changes/#version-2-3-1">https://flask.palletsprojects.com/en/2.3.x/changes/#version-2-3-1</a></li>
<li>Milestone: <a href="https://github.com/pallets/flask/milestone/28?closed=1">https://github.com/pallets/flask/milestone/28?closed=1</a></li>
</ul>
<h2>2.3.0</h2>
<p>This is a feature release, which includes new features, removes previously deprecated code, and adds new deprecations. The 2.3.x branch is now the supported fix branch, the 2.2.x branch will become a tag marking the end of support for that branch. We encourage everyone to upgrade, and to use a tool such as <a href="https://pypi.org/project/pip-tools/">pip-tools</a> to pin all dependencies and control upgrades. Test with warnings treated as errors to be able to adapt to deprecation warnings early.</p>
<ul>
<li>Changes: <a href="https://flask.palletsprojects.com/en/2.3.x/changes/#version-2-3-0">https://flask.palletsprojects.com/en/2.3.x/changes/#version-2-3-0</a></li>
<li>Milestone: <a href="https://github.com/pallets/flask/milestone/24?closed=1">https://github.com/pallets/flask/milestone/24?closed=1</a></li>
</ul>
<h2>2.2.4</h2>
<p>This is a fix release for the 2.2.x release branch.</p>
<ul>
<li>Changes: <a href="https://flask.palletsprojects.com/en/2.2.x/changes/#version-2-2-4">https://flask.palletsprojects.com/en/2.2.x/changes/#version-2-2-4</a></li>
<li>Milestone: <a href="https://github.com/pallets/flask/milestone/27?closed=1">https://github.com/pallets/flask/milestone/27?closed=1</a></li>
</ul>
<h2>2.2.3</h2>
<p>This is a fix release for the 2.2.x release branch.</p>
<ul>
<li>Changes: <a href="https://flask.palletsprojects.com/en/2.2.x/changes/#version-2-2-3">https://flask.palletsprojects.com/en/2.2.x/changes/#version-2-2-3</a></li>
<li>Milestone: <a href="https://github.com/pallets/flask/milestone/26?closed=1">https://github.com/pallets/flask/milestone/26?closed=1</a></li>
</ul>
<h2>2.2.2</h2>
<p>This is a fix release for the <a href="https://github.com/pallets/flask/releases/tag/2.2.0">2.2.0</a> feature release.</p>
<ul>
<li>Changes: <a href="https://flask.palletsprojects.com/en/2.2.x/changes/#version-2-2-2">https://flask.palletsprojects.com/en/2.2.x/changes/#version-2-2-2</a></li>
<li>Milestone: <a href="https://github.com/pallets/flask/milestone/25?closed=1">https://github.com/pallets/flask/milestone/25?closed=1</a></li>
</ul>
<h2>2.2.1</h2>
<p>This is a fix release for the <a href="https://github.com/pallets/flask/releases/tag/2.2.0">2.2.0</a> feature release.</p>
<ul>
<li>Changes: <a href="https://flask.palletsprojects.com/en/2.2.x/changes/#version-2-2-1">https://flask.palletsprojects.com/en/2.2.x/changes/#version-2-2-1</a></li>
<li>Milestone: <a href="https://github.com/pallets/flask/milestone/23?closed=1">https://github.com/pallets/flask/milestone/23?closed=1</a></li>
</ul>
<h2>2.2.0</h2>
<p>This is a feature release, which includes new features and removes previously deprecated code. The 2.2.x branch is now the supported bug fix branch, the 2.1.x branch will become a tag marking the end of support for that branch. We encourage everyone to upgrade, and to use a tool such as <a href="https://pypi.org/project/pip-tools/">pip-tools</a> to pin all dependencies and control upgrades.</p>
<ul>
<li>Changes: <a href="https://flask.palletsprojects.com/en/2.2.x/changes/#version-2-2-0">https://flask.palletsprojects.com/en/2.2.x/changes/#version-2-2-0</a></li>
<li>Milestone: <a href="https://github.com/pallets/flask/milestone/19?closed=1">https://github.com/pallets/flask/milestone/19?closed=1</a></li>
</ul>
<h2>2.1.3</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pallets/flask/blob/main/CHANGES.rst">flask's changelog</a>.</em></p>
<blockquote>
<h2>Version 2.3.2</h2>
<p>Released 2023-05-01</p>
<ul>
<li>Set <code>Vary: Cookie</code> header when the session is accessed, modified, or refreshed.</li>
<li>Update Werkzeug requirement to >=2.3.3 to apply recent bug fixes.</li>
</ul>
<h2>Version 2.3.1</h2>
<p>Released 2023-04-25</p>
<ul>
<li>Restore deprecated <code>from flask import Markup</code>. :issue:<code>5084</code></li>
</ul>
<h2>Version 2.3.0</h2>
<p>Released 2023-04-25</p>
<ul>
<li>
<p>Drop support for Python 3.7. :pr:<code>5072</code></p>
</li>
<li>
<p>Update minimum requirements to the latest versions: Werkzeug>=2.3.0, Jinja2>3.1.2,
itsdangerous>=2.1.2, click>=8.1.3.</p>
</li>
<li>
<p>Remove previously deprecated code. :pr:<code>4995</code></p>
<ul>
<li>The <code>push</code> and <code>pop</code> methods of the deprecated <code>_app_ctx_stack</code> and
<code>_request_ctx_stack</code> objects are removed. <code>top</code> still exists to give
extensions more time to update, but it will be removed.</li>
<li>The <code>FLASK_ENV</code> environment variable, <code>ENV</code> config key, and <code>app.env</code>
property are removed.</li>
<li>The <code>session_cookie_name</code>, <code>send_file_max_age_default</code>, <code>use_x_sendfile</code>,
<code>propagate_exceptions</code>, and <code>templates_auto_reload</code> properties on <code>app</code>
are removed.</li>
<li>The <code>JSON_AS_ASCII</code>, <code>JSON_SORT_KEYS</code>, <code>JSONIFY_MIMETYPE</code>, and
<code>JSONIFY_PRETTYPRINT_REGULAR</code> config keys are removed.</li>
<li>The <code>app.before_first_request</code> and <code>bp.before_app_first_request</code> decorators
are removed.</li>
<li><code>json_encoder</code> and <code>json_decoder</code> attributes on app and blueprint, and the
corresponding <code>json.JSONEncoder</code> and <code>JSONDecoder</code> classes, are removed.</li>
<li>The <code>json.htmlsafe_dumps</code> and <code>htmlsafe_dump</code> functions are removed.</li>
<li>Calling setup methods on blueprints after registration is an error instead of a
warning. :pr:<code>4997</code></li>
</ul>
</li>
<li>
<p>Importing <code>escape</code> and <code>Markup</code> from <code>flask</code> is deprecated. Import them
directly from <code>markupsafe</code> instead. :pr:<code>4996</code></p>
</li>
<li>
<p>The <code>app.got_first_request</code> property is deprecated. :pr:<code>4997</code></p>
</li>
<li>
<p>The <code>locked_cached_property</code> decorator is deprecated. Use a lock inside the
decorated function if locking is needed. :issue:<code>4993</code></p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pallets/flask/commit/f3b8f570545200c87465d18386f3fc9f2258307a"><code>f3b8f57</code></a> release version 2.3.2</li>
<li><a href="https://github.com/pallets/flask/commit/c990bba94ab9bc81adf2d33e83c9a9628a2098f2"><code>c990bba</code></a> update min test env</li>
<li><a href="https://github.com/pallets/flask/commit/adedb2a64ea7703369bc89021710b439ee79f8dc"><code>adedb2a</code></a> Merge pull request <a href="https://redirect.github.com/pallets/flask/issues/5101">#5101</a> from pallets/update-werkzeug</li>
<li><a href="https://github.com/pallets/flask/commit/e1aedecdc689cc9a79131851dbdabf6c3bc49c9e"><code>e1aedec</code></a> update werkzeug</li>
<li><a href="https://github.com/pallets/flask/commit/37badc3ce8b0665e3454547839196a676729309f"><code>37badc3</code></a> update changelog</li>
<li><a href="https://github.com/pallets/flask/commit/70f906c51ce49c485f1d355703e9cc3386b1cc2b"><code>70f906c</code></a> Merge pull request from GHSA-m2qf-hxjv-5gpq</li>
<li><a href="https://github.com/pallets/flask/commit/8705dd39c4fa563ea0fe0bf84c85da8fcc98b88d"><code>8705dd3</code></a> set <code>Vary: Cookie</code> header consistently for session</li>
<li><a href="https://github.com/pallets/flask/commit/9532cba45d2339e90ebf04f178b1e4f2064e7328"><code>9532cba</code></a> fix mypy finding</li>
<li><a href="https://github.com/pallets/flask/commit/0bc7356ce1ae11e633426902aba76d525f4523da"><code>0bc7356</code></a> start version 2.3.2</li>
<li><a href="https://github.com/pallets/flask/commit/f07fb2b607c1eaa724ca9bfe43e2dc20d97d34de"><code>f07fb2b</code></a> Merge pull request <a href="https://redirect.github.com/pallets/flask/issues/5086">#5086</a> from pallets/release-2.3.1</li>
<li>Additional commits viewable in <a href="https://github.com/pallets/flask/compare/2.0.3...2.3.2">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23094/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23094",
"html_url": "https://github.com/huggingface/transformers/pull/23094",
"diff_url": "https://github.com/huggingface/transformers/pull/23094.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23094.patch",
"merged_at": 1682986511000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23093
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23093/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23093/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23093/events
|
https://github.com/huggingface/transformers/pull/23093
| 1,691,493,634 |
PR_kwDOCUB6oc5PibuY
| 23,093 |
Merge type hints from microsoft/python-type-stubs
|
{
"login": "Avasam",
"id": 1350584,
"node_id": "MDQ6VXNlcjEzNTA1ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1350584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Avasam",
"html_url": "https://github.com/Avasam",
"followers_url": "https://api.github.com/users/Avasam/followers",
"following_url": "https://api.github.com/users/Avasam/following{/other_user}",
"gists_url": "https://api.github.com/users/Avasam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Avasam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Avasam/subscriptions",
"organizations_url": "https://api.github.com/users/Avasam/orgs",
"repos_url": "https://api.github.com/users/Avasam/repos",
"events_url": "https://api.github.com/users/Avasam/events{/privacy}",
"received_events_url": "https://api.github.com/users/Avasam/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23093). All of your documentation changes will be reflected on that endpoint.",
"> the tokenizers imported for Fnet, T5 or Pegasus are wrong\r\n\r\nI copied exactly what I obtained from runtime inspection (see image below). `FNetTokenizer`, `PegasusTokenizer`, `T5Tokenizer` and `RagTokenizer` have no common base class with `PreTrainedTokenizer`. Unless there's a `Protocol` I could use (or add) instead, or if you're fine with hiding that these are real potential results, or simplify the `Union` by throwing in a `type` or even an `Any` (with a comment about incomplete type),\r\n\r\n\r\n> not interested in creating dependencies on the auto module over all those new modules\r\n\r\nUpdated to not have runtime dependencies",
"Python is not a statically typed language and your runtime inspection will be different form another user's runtime inspection depending on the packages installed. Again this is way more headache that what we want to deal with and the benefits of adding type hints, so we won't merge any type hints in the auto module.",
"Since it's not possible to get accurate and useful inline generic type hints without changing the base class to an alias due to Python 3.8 support: To be reconsidered once python 3.8 support is dropped.\r\n\r\nI'll backport this to https://github.com/microsoft/python-type-stubs so they're at least accurate. Which may or may not be migrated to typeshed at some point."
] | 1,682 | 1,686 | 1,685 |
NONE
| null |
# What does this PR do?
Merge type definitions from https://github.com/microsoft/python-type-stubs/tree/main/transformers-stubs so it can be removed from Pylance.
This is also work towards #16059
I cross-checked the types with what I got at runtime.
I also ran `pyright --pythonversion=3.7` on both files to sanity-check that I'm not writing anything that will obviously break at runtime under Python 3.7.
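For illustration, a minimal sketch of the kind of stub being merged, assuming a broad `Union` return type (the names and signature here are illustrative, not the PR's actual contents):

```python
# Illustrative stub sketch: the concrete tokenizer class returned by
# AutoTokenizer.from_pretrained depends on the checkpoint, so a stub can
# only offer a broad Union (or a Protocol, as discussed above).
from typing import Any, Union

from transformers import PreTrainedTokenizer, PreTrainedTokenizerFast

class AutoTokenizer:
    @classmethod
    def from_pretrained(
        cls, *args: Any, **kwargs: Any
    ) -> Union[PreTrainedTokenizer, PreTrainedTokenizerFast]: ...
```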
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section? (yes but make commands are not working on my machine)
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. (no)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). (I guess? Type hints are docs; docstrings and external docs are unchanged by my PR.)
- [ ] Did you write any new necessary tests? (no, if you test using mypy/pyright this should already be picked up. Unit tests should naturally break if using syntax or imports incompatible with 3.7)
## Who can review?
🤷 The list below doesn't mention typing / type hints
I guess @Rocketknight1 who opened #16059
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23093/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23093",
"html_url": "https://github.com/huggingface/transformers/pull/23093",
"diff_url": "https://github.com/huggingface/transformers/pull/23093.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23093.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23092
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23092/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23092/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23092/events
|
https://github.com/huggingface/transformers/issues/23092
| 1,691,332,305 |
I_kwDOCUB6oc5kz67R
| 23,092 |
Simplifying Output from Text Classification Pipelines
|
{
"login": "907Resident",
"id": 12258706,
"node_id": "MDQ6VXNlcjEyMjU4NzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/12258706?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/907Resident",
"html_url": "https://github.com/907Resident",
"followers_url": "https://api.github.com/users/907Resident/followers",
"following_url": "https://api.github.com/users/907Resident/following{/other_user}",
"gists_url": "https://api.github.com/users/907Resident/gists{/gist_id}",
"starred_url": "https://api.github.com/users/907Resident/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/907Resident/subscriptions",
"organizations_url": "https://api.github.com/users/907Resident/orgs",
"repos_url": "https://api.github.com/users/907Resident/repos",
"events_url": "https://api.github.com/users/907Resident/events{/privacy}",
"received_events_url": "https://api.github.com/users/907Resident/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This is also something we cannot change without breaking the code of many many users :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,688 | 1,688 |
NONE
| null |
### Feature request
> This feature request references the content discussed in an [HF thread](https://discuss.huggingface.co/t/i-have-trained-my-classifier-now-how-do-i-do-predictions/3625).
I was just wondering if there is a particular reason why the output of the `pipe` shown below is a double list?
For instance, the output of the following:
```python
from transformers import TextClassificationPipeline
model = ...
tokenizer = ...
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True)
res = pipe("I love this movie!")
# res is a list of lists of dicts, like
# [[{'label': 'NEGATIVE', 'score': 0.0001223755971295759}, {'label': 'POSITIVE', 'score': 0.9998776316642761}]]
```
is a list that has _two_ square brackets (i.e., `[[ ... ]]`). This means that grabbing, say, the negative score requires indexing twice:
```python
neg = res[0][0]["score"]
```
This could be enhanced by simply returning a single dictionary object:
```python
res = {"label":["NEGATIVE", "POSITIVE"], "score":[0.0001, 0.9998]}
```
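A minimal sketch (not existing pipeline behavior) of flattening the current output into the proposed shape:

```python
# Hypothetical helper, assuming a single input sentence was passed to the pipeline.
def flatten_scores(res):
    rows = res[0]  # one input sentence -> one inner list of {label, score} dicts
    return {
        "label": [row["label"] for row in rows],
        "score": [row["score"] for row in rows],
    }

# flatten_scores(pipe("I love this movie!"))
# -> {'label': ['NEGATIVE', 'POSITIVE'], 'score': [0.0001..., 0.9998...]}
```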
### Motivation
This idea came from reading a [HF discussion thread](https://discuss.huggingface.co/t/i-have-trained-my-classifier-now-how-do-i-do-predictions/3625). It was two years ago, so I did not want to reopen the conversation there.
Also, I think this is a feature addition, but feel free to correct me if I am wrong.
### Your contribution
I do not currently have plans to submit a PR, but if there is interest from the HF team, then I will take a harder look and comment here if I can make the change and submit a PR.
My initial guess is that this is not something that bothers many users.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23092/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23091
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23091/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23091/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23091/events
|
https://github.com/huggingface/transformers/pull/23091
| 1,691,311,902 |
PR_kwDOCUB6oc5Phykd
| 23,091 |
DETR: changing position_embedding and key_value_position_embedding args
|
{
"login": "Lorenzobattistela",
"id": 70359945,
"node_id": "MDQ6VXNlcjcwMzU5OTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/70359945?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lorenzobattistela",
"html_url": "https://github.com/Lorenzobattistela",
"followers_url": "https://api.github.com/users/Lorenzobattistela/followers",
"following_url": "https://api.github.com/users/Lorenzobattistela/following{/other_user}",
"gists_url": "https://api.github.com/users/Lorenzobattistela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lorenzobattistela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lorenzobattistela/subscriptions",
"organizations_url": "https://api.github.com/users/Lorenzobattistela/orgs",
"repos_url": "https://api.github.com/users/Lorenzobattistela/repos",
"events_url": "https://api.github.com/users/Lorenzobattistela/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lorenzobattistela/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23091). All of your documentation changes will be reflected on that endpoint.",
"Hi,\r\n\r\nI dont know if this issue is still up.\r\n\r\nI believe you need to change the names of the files mentioned in the fixup too. Since in the paper of [Conditional DETR](https://arxiv.org/pdf/2108.06152.pdf), they also use the same nomenclature (sometimes `object queries` are also called `content queries` though) .\r\n\r\nFor example in [modeling_conditional_detr.py](https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/conditional_detr/modeling_conditional_detr.py#LL556C9-L556C28) the names of the forward function are still `position_embeddings`, so you would need to change that to `object queries` for consistency.\r\n\r\nSame applies to the other file mentioned in the fixup too. I am also new to fixing PRs in this repo, so I would leave this decision to the reviewers, but I believe it makes sense if you would like to apply the changes @Lorenzobattistela . If not, maybe another issue could be created for that.\r\n",
"@A-guridi Hey, I understand these names could be changed to keep consistency, and I am up to do this. But I don't know if this is the right to do since the issue is specific about DETR. But I'll try what you said, let's wait up the reviewers\r\n",
"@NielsRogge Could you review and confirm if it aligns with your suggestion in #19833? ",
"> Thanks for working on this, left some comments.\r\n> \r\n> Specifically, DETR's decoder uses 2 types of position embeddings:\r\n> \r\n> * the ones that are added to the inputs i.e. hidden states of each cross-attention layer (the object_queries)\r\n> * the ones that are added to the keys and values of each cross-attention layer (the spatial_position_embeddings)\r\n\r\nworking on it",
"git history got messed up, will open a new PR just with the correct changes",
"Reopened PR #24652"
] | 1,682 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR refers to #19833, and it just updates some variable/docstring names. Quoting the issue, the paper mentions that the `position_embeddings` argument of the cross-attention layer holds the input embeddings called `object queries`, and that `key_value_position_embeddings` is referred to as `spatial_position_embeddings`.
This PR is limited to the DETR model.
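For illustration, a schematic view of the rename (the argument names are from the issue; the surrounding signature is illustrative, not the actual diff):

```python
# Schematic only -- see the PR diff for the real cross-attention signature.
def forward(
    self,
    hidden_states,
    object_queries=None,               # was: position_embeddings
    spatial_position_embeddings=None,  # was: key_value_position_embeddings
):
    ...
```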
### Notes
This is my first contribution, so I'm happy to adjust anything in this PR. I ran all the tests and style checks, and everything passed except for one:
`make fixup`. I got the following output:

Reading the output, I assume it is about other files using classes from modeling_detr. I'll wait for updates, and will also wait for review on the doc updates or further guidance.
Fixes #19833
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://github.com/huggingface/transformers/issues/19833
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
@amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23091/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23091",
"html_url": "https://github.com/huggingface/transformers/pull/23091",
"diff_url": "https://github.com/huggingface/transformers/pull/23091.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23091.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23090
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23090/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23090/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23090/events
|
https://github.com/huggingface/transformers/issues/23090
| 1,691,215,342 |
I_kwDOCUB6oc5kzeXu
| 23,090 |
ConvNextV2 weight not initialized
|
{
"login": "IMvision12",
"id": 88665786,
"node_id": "MDQ6VXNlcjg4NjY1Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/88665786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IMvision12",
"html_url": "https://github.com/IMvision12",
"followers_url": "https://api.github.com/users/IMvision12/followers",
"following_url": "https://api.github.com/users/IMvision12/following{/other_user}",
"gists_url": "https://api.github.com/users/IMvision12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IMvision12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IMvision12/subscriptions",
"organizations_url": "https://api.github.com/users/IMvision12/orgs",
"repos_url": "https://api.github.com/users/IMvision12/repos",
"events_url": "https://api.github.com/users/IMvision12/events{/privacy}",
"received_events_url": "https://api.github.com/users/IMvision12/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @IMvision12, thanks for opening the issue! This doesn't effect the output logits significantly but we pinpointed the issue and will fix it shortly."
] | 1,682 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
### System Info
Kaggle Notebook:
- `transformers` version: 4.27.4
- Platform: Linux-5.15.90+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 1.13.0+cpu (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.4 (cpu)
- Jax version: 0.3.25
- JaxLib version: 0.3.25
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger @alaradirik @amyeroberts
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoImageProcessor, ConvNextV2ForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("facebook/convnextv2-atto-1k-224")
model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-atto-1k-224")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
output:
```
Downloading builder script: 100%
2.56k/2.56k [00:00<00:00, 120kB/s]
Downloading and preparing dataset cats_image/image to /root/.cache/huggingface/datasets/huggingface___cats_image/image/1.9.0/68fbc793fb10cd165e490867f5d61fa366086ea40c73e549a020103dcb4f597e...
Downloading data files: 100%
1/1 [00:00<00:00, 2.30it/s]
Downloading data: 100%
173k/173k [00:00<00:00, 637kB/s]
Extracting data files: 100%
1/1 [00:00<00:00, 59.25it/s]
Dataset cats_image downloaded and prepared to /root/.cache/huggingface/datasets/huggingface___cats_image/image/1.9.0/68fbc793fb10cd165e490867f5d61fa366086ea40c73e549a020103dcb4f597e. Subsequent calls will reuse this data.
100%
1/1 [00:00<00:00, 56.19it/s]
Downloading (…)rocessor_config.json: 100%
352/352 [00:00<00:00, 12.5kB/s]
Downloading (…)lve/main/config.json: 100%
69.7k/69.7k [00:00<00:00, 2.71MB/s]
Downloading pytorch_model.bin: 100%
14.9M/14.9M [00:00<00:00, 70.7MB/s]
Some weights of the model checkpoint at facebook/convnextv2-atto-1k-224 were not used when initializing ConvNextV2ForImageClassification: ['convnextv2.encoder.stages.2.layers.3.grn.weight', 'convnextv2.encoder.stages.1.layers.0.grn.weight', 'convnextv2.encoder.stages.2.layers.4.grn.weight', 'convnextv2.encoder.stages.0.layers.1.grn.bias', 'convnextv2.encoder.stages.2.layers.5.grn.bias', 'convnextv2.encoder.stages.0.layers.1.grn.weight', 'convnextv2.encoder.stages.3.layers.1.grn.weight', 'convnextv2.encoder.stages.1.layers.1.grn.weight', 'convnextv2.encoder.stages.0.layers.0.grn.weight', 'convnextv2.encoder.stages.2.layers.0.grn.bias', 'convnextv2.encoder.stages.2.layers.2.grn.bias', 'convnextv2.encoder.stages.1.layers.0.grn.bias', 'convnextv2.encoder.stages.3.layers.0.grn.weight', 'convnextv2.encoder.stages.3.layers.1.grn.bias', 'convnextv2.encoder.stages.1.layers.1.grn.bias', 'convnextv2.encoder.stages.2.layers.0.grn.weight', 'convnextv2.encoder.stages.2.layers.1.grn.weight', 'convnextv2.encoder.stages.2.layers.4.grn.bias', 'convnextv2.encoder.stages.2.layers.1.grn.bias', 'convnextv2.encoder.stages.2.layers.3.grn.bias', 'convnextv2.encoder.stages.2.layers.5.grn.weight', 'convnextv2.encoder.stages.2.layers.2.grn.weight', 'convnextv2.encoder.stages.0.layers.0.grn.bias', 'convnextv2.encoder.stages.3.layers.0.grn.bias']
- This IS expected if you are initializing ConvNextV2ForImageClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing ConvNextV2ForImageClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of ConvNextV2ForImageClassification were not initialized from the model checkpoint at facebook/convnextv2-atto-1k-224 and are newly initialized: ['convnextv2.encoder.stages.2.layers.5.grn.beta', 'convnextv2.encoder.stages.1.layers.0.grn.beta', 'convnextv2.encoder.stages.1.layers.1.grn.gamma', 'convnextv2.encoder.stages.0.layers.0.grn.beta', 'convnextv2.encoder.stages.2.layers.0.grn.gamma', 'convnextv2.encoder.stages.0.layers.1.grn.gamma', 'convnextv2.encoder.stages.3.layers.1.grn.beta', 'convnextv2.encoder.stages.2.layers.4.grn.gamma', 'convnextv2.encoder.stages.1.layers.1.grn.beta', 'convnextv2.encoder.stages.3.layers.0.grn.beta', 'convnextv2.encoder.stages.2.layers.0.grn.beta', 'convnextv2.encoder.stages.3.layers.1.grn.gamma', 'convnextv2.encoder.stages.2.layers.5.grn.gamma', 'convnextv2.encoder.stages.2.layers.3.grn.gamma', 'convnextv2.encoder.stages.2.layers.2.grn.beta', 'convnextv2.encoder.stages.2.layers.4.grn.beta', 'convnextv2.encoder.stages.2.layers.1.grn.gamma', 'convnextv2.encoder.stages.0.layers.1.grn.beta', 'convnextv2.encoder.stages.2.layers.2.grn.gamma', 'convnextv2.encoder.stages.3.layers.0.grn.gamma', 'convnextv2.encoder.stages.2.layers.1.grn.beta', 'convnextv2.encoder.stages.0.layers.0.grn.gamma', 'convnextv2.encoder.stages.2.layers.3.grn.beta', 'convnextv2.encoder.stages.1.layers.0.grn.gamma']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
racket, racquet
```
### Expected behavior
There should be no warning about weight initialization.
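For context, the warning pairs unused checkpoint keys `grn.weight`/`grn.bias` with newly initialized model keys `grn.gamma`/`grn.beta`, which points at a pure key-naming mismatch. A minimal sketch of remapping such keys (a speculative workaround, assuming the tensors are identical apart from their names; not the fix HF shipped):

```python
def remap_grn_keys(state_dict):
    # Rename "...grn.weight" -> "...grn.gamma" and "...grn.bias" -> "...grn.beta".
    remapped = {}
    for key, value in state_dict.items():
        if key.endswith("grn.weight"):
            key = key[: -len("weight")] + "gamma"
        elif key.endswith("grn.bias"):
            key = key[: -len("bias")] + "beta"
        remapped[key] = value
    return remapped
```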
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23090/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23089
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23089/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23089/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23089/events
|
https://github.com/huggingface/transformers/pull/23089
| 1,691,126,522 |
PR_kwDOCUB6oc5PhJo6
| 23,089 |
[WIP] Add GC ViT model
|
{
"login": "JorgeAV-ai",
"id": 32791438,
"node_id": "MDQ6VXNlcjMyNzkxNDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/32791438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JorgeAV-ai",
"html_url": "https://github.com/JorgeAV-ai",
"followers_url": "https://api.github.com/users/JorgeAV-ai/followers",
"following_url": "https://api.github.com/users/JorgeAV-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/JorgeAV-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JorgeAV-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JorgeAV-ai/subscriptions",
"organizations_url": "https://api.github.com/users/JorgeAV-ai/orgs",
"repos_url": "https://api.github.com/users/JorgeAV-ai/repos",
"events_url": "https://api.github.com/users/JorgeAV-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/JorgeAV-ai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23089). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,688 | 1,688 |
NONE
| null |
# What does this PR do?
Adds the GC ViT model to the Transformers library. _Still a work in progress; everything but the docs is in place._
I did not find any PR related to this model architecture, which really surprised me, so instead of opening a new _Issue_ I will add the information related to the model here.
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
- Model paper [here](https://arxiv.org/pdf/2206.09959.pdf)
- Official Implementation [here](https://github.com/NVlabs/GCVit/)
- Timm implementation with pretrained weights _(small detail: the weights are under a non-commercial share-alike license)_ [here](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/gcvit.py)
This is my first PR, so things are going to be slow; I'll let you know if I have any questions (I expect GitHub to notify me when someone responds).
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Fixes # (issue) -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23089/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23089/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23089",
"html_url": "https://github.com/huggingface/transformers/pull/23089",
"diff_url": "https://github.com/huggingface/transformers/pull/23089.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23089.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23088
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23088/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23088/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23088/events
|
https://github.com/huggingface/transformers/pull/23088
| 1,691,126,256 |
PR_kwDOCUB6oc5PhJk5
| 23,088 |
Generate: work around PT `multinomial` sampling 0 probability tokens
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"As discussed internally, this is a regression on the PyTorch side for 2.0, so this should be fixed by PyTorch and not by us adding some overload to `generate`.",
"(closing because of the comment above)"
] | 1,682 | 1,684 | 1,682 |
MEMBER
| null |
# What does this PR do?
Fixes #22979
As raised in [this `transformers` issue](https://github.com/huggingface/transformers/issues/22979) and [this `pytorch` issue](https://github.com/pytorch/pytorch/issues/48841), `multinomial` can erroneously pick `0`-probability tokens. According to the reports and my own observations, the error is much more likely on CPU.
The chance of selecting a token with `-inf` logits is far from negligible: in this [simple example with `top_k=40`](https://github.com/huggingface/transformers/issues/22979#issuecomment-1529770291), it happens 0.158% of the time on CPU -- which means a ~50% chance that a sequence of 500 newly generated tokens contains at least one token that shouldn't be there.
This PR adds a quick-and-dirty workaround while the PT team works on the issue: at each sampling step, pick 5 candidates and keep the first valid one. Assuming independence, the probability of one or more forbidden tokens in the example above drops to ~5e-10 %.
Runtime overhead: for `distilgpt2`, a small model where operations outside the model have some weight, generation got 2% slower on GPU (RTX3090) and 1% slower on CPU (Ryzen 9 5950X). On larger models, the slowdown becomes negligible.
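A minimal sketch of the workaround's idea (illustrative; not the PR's actual diff):

```python
import torch

def sample_skipping_zero_prob_tokens(probs: torch.Tensor, num_candidates: int = 5) -> torch.Tensor:
    # probs: (batch, vocab_size) rows summing to 1; zero entries are forbidden tokens.
    # Draw several candidates per row and keep the first one with nonzero probability.
    candidates = torch.multinomial(probs, num_samples=num_candidates, replacement=True)
    valid = torch.gather(probs, 1, candidates) > 0
    # argmax over ints returns the index of the first True in each row
    first_valid = valid.int().argmax(dim=1, keepdim=True)
    chosen = torch.gather(candidates, 1, first_valid)
    # in the (astronomically unlikely) case that all candidates were invalid,
    # fall back to the highest-probability token
    all_invalid = ~valid.any(dim=1, keepdim=True)
    fallback = probs.argmax(dim=1, keepdim=True)
    return torch.where(all_invalid, fallback, chosen).squeeze(1)
```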
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23088/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23088",
"html_url": "https://github.com/huggingface/transformers/pull/23088",
"diff_url": "https://github.com/huggingface/transformers/pull/23088.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23088.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23087
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23087/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23087/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23087/events
|
https://github.com/huggingface/transformers/issues/23087
| 1,691,008,728 |
I_kwDOCUB6oc5kyr7Y
| 23,087 |
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [1, 128]] is at version 3; expected version 2 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
|
{
"login": "jordane95",
"id": 69186130,
"node_id": "MDQ6VXNlcjY5MTg2MTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/69186130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jordane95",
"html_url": "https://github.com/jordane95",
"followers_url": "https://api.github.com/users/jordane95/followers",
"following_url": "https://api.github.com/users/jordane95/following{/other_user}",
"gists_url": "https://api.github.com/users/jordane95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jordane95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jordane95/subscriptions",
"organizations_url": "https://api.github.com/users/jordane95/orgs",
"repos_url": "https://api.github.com/users/jordane95/repos",
"events_url": "https://api.github.com/users/jordane95/events{/privacy}",
"received_events_url": "https://api.github.com/users/jordane95/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2155169140,
"node_id": "MDU6TGFiZWwyMTU1MTY5MTQw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/trainer",
"name": "trainer",
"color": "2ef289",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"Hey! \r\nGiven how big the reproduction script is, I'm gonna say this is probably related to the way you are wrapping the use of transformers models, and would recommend you to ask on the [forum](https://discuss.huggingface.co/) to see if anyone in the community can help you with this! \r\nI won't have time to dive into this, maybe @younesbelkada \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @jordane95 @ArthurZucker \r\nSadly I won\"t have time to dig into that :/ \r\n@jordane95 do you still face the issue on the main branch of transformers?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> Hi @jordane95 @ArthurZucker Sadly I won\"t have time to dig into that :/ @jordane95 do you still face the issue on the main branch of transformers?\r\n\r\nYeah, this seems to be a problem involved with the siamese architecture? Althogh I can avoid this error by moving loss computation operations in `compute_loss` function of the trainer class to the `forward` function of model class, I'm still curious why this error occurs.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@jordane95 any idea of what happened in this error? Thanks"
] | 1,682 | 1,707 | 1,692 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.13.0-27-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.10.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger @ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I want to train an embedding-based retrieval QA system by minimizing the contrastive loss of correct (q, a) pairs against in-batch negatives. I also want it to run on multiple GPUs. But I run into a backward-propagation problem in the position embedding layer of BERT (which I infer from the error log) when running in a distributed manner. I don't know what is broken (the trainer? BertModel? PyTorch?)
By the way, the code works in the single-GPU setting.
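For reference, a minimal sketch of the training objective described above (illustrative; the reporter's full script is not reproduced here):

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(q_emb: torch.Tensor, a_emb: torch.Tensor) -> torch.Tensor:
    # q_emb, a_emb: (batch, hidden) question/answer embeddings; row i of each is
    # a correct (q, a) pair, so the diagonal of the score matrix is the positive
    # and the other entries in each row act as in-batch negatives.
    scores = q_emb @ a_emb.T                                    # (batch, batch)
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```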
Command that I ran:
```bash
torchrun --nproc_per_node 2 retrieval_qa.py \
--model_name_or_path bert-base-uncased \
--output_dir debug \
--max_steps 10000 \
--remove_unused_columns False \
--learning_rate 5e-5 \
--logging_steps 10 \
--save_steps 500 \
--warmup_ratio 0.0 \
--per_device_train_batch_size 16 \
--normalize True
```
Error details:
```bash
***** Running training *****
Num examples = 20360
Num Epochs = 16
Instantaneous batch size per device = 16
Total train batch size (w. parallel, distributed & accumulation) = 32
Gradient Accumulation steps = 1
Total optimization steps = 10000
Number of trainable parameters = 109482240
0%| | 0/10000 [00:00<?, ?it/s][W python_anomaly_mode.cpp:104] Warning: Error detected in EmbeddingBackward0. Traceback of forward call that caused the error:
File "retrieval_qa.py", line 213, in <module>
main()
File "retrieval_qa.py", line 209, in main
trainer.train()
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 1531, in train
ignore_keys_for_eval=ignore_keys_for_eval,
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 1775, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 2523, in training_step
loss = self.compute_loss(model, inputs)
File "retrieval_qa.py", line 142, in compute_loss
token_type_ids=inputs[k]['token_type_ids'],
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 886, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "retrieval_qa.py", line 103, in forward
model_output = self.model(**kwargs)
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 1019, in forward
past_key_values_length=past_key_values_length,
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 236, in forward
position_embeddings = self.position_embeddings(position_ids)
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 160, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/functional.py", line 2044, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
(function _print_stack)
[W python_anomaly_mode.cpp:104] Warning: Error detected in EmbeddingBackward0. Traceback of forward call that caused the error:
File "retrieval_qa.py", line 213, in <module>
main()
File "retrieval_qa.py", line 209, in main
trainer.train()
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 1531, in train
ignore_keys_for_eval=ignore_keys_for_eval,
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 1775, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 2523, in training_step
loss = self.compute_loss(model, inputs)
File "retrieval_qa.py", line 142, in compute_loss
token_type_ids=inputs[k]['token_type_ids'],
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 886, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "retrieval_qa.py", line 103, in forward
model_output = self.model(**kwargs)
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 1019, in forward
past_key_values_length=past_key_values_length,
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 236, in forward
position_embeddings = self.position_embeddings(position_ids)
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 160, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/functional.py", line 2044, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
(function _print_stack)
Traceback (most recent call last):
File "retrieval_qa.py", line 213, in <module>
main()
File "retrieval_qa.py", line 209, in main
trainer.train()
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 1531, in train
ignore_keys_for_eval=ignore_keys_for_eval,
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 1775, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 2541, in training_step
Traceback (most recent call last):
File "retrieval_qa.py", line 213, in <module>
loss.backward()
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/_tensor.py", line 307, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/autograd/__init__.py", line 156, in backward
allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [1, 128]] is at version 3; expected version 2 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
```
Source code of `retrieval_qa.py`
```Python
import logging
import os
import sys
from typing import Dict, List, Tuple, Optional, Any, Union
import torch
from torch import nn
from torch.nn import functional as F
from transformers import AutoConfig, AutoModel, AutoTokenizer
from transformers import (
HfArgumentParser,
set_seed,
)
import os
from dataclasses import dataclass, field
from typing import Optional, List
from transformers import TrainingArguments
from transformers import DataCollatorWithPadding
from transformers.trainer import Trainer
import logging
logger = logging.getLogger(__name__)
# Name of the files used for checkpointing
TRAINING_ARGS_NAME = "training_args.bin"
TRAINER_STATE_NAME = "trainer_state.json"
OPTIMIZER_NAME = "optimizer.pt"
SCHEDULER_NAME = "scheduler.pt"
SCALER_NAME = "scaler.pt"
@dataclass
class ModelArguments:
model_name_or_path: str = field(
metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
)
config_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
)
tokenizer_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
)
cache_dir: Optional[str] = field(
default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from s3"}
)
normalize: bool = field(default=False)
pooling: str = field(default='mean')
@dataclass
class QPCollator(DataCollatorWithPadding):
"""
Wrapper that does conversion from List[Tuple[encode_qry, encode_psg]] to List[qry], List[psg]
and pass batch separately to the actual collator.
Abstract out data detail for the model.
"""
max_q_len: int = 32
max_p_len: int = 128
def __call__(self, features):
keys = list(features[0].keys())
collated_batch = {}
for key in keys:
if not isinstance(features[0][key], str):
continue
text = [f[key] for f in features]
# print(text)
text_batch = self.tokenizer(
text,
padding='max_length',
truncation=True,
max_length=self.max_p_len,
return_tensors="pt",
)
collated_batch[key] = text_batch
return collated_batch
class AutoModelForSentenceEmbedding(nn.Module):
def __init__(
self,
model_name_or_path,
tokenizer=None,
pooling='cls',
normalize=True,
):
super(AutoModelForSentenceEmbedding, self).__init__()
self.model = AutoModel.from_pretrained(model_name_or_path)
self.tokenizer = tokenizer if tokenizer else AutoTokenizer.from_pretrained(model_name_or_path)
self.pooling = pooling
self.normalize = normalize
def forward(self, **kwargs):
model_output = self.model(**kwargs)
embeddings = self.mean_pooling(model_output, kwargs['attention_mask'])
if self.normalize:
embeddings = F.normalize(embeddings, p=2, dim=1)
return embeddings
def mean_pooling(self, model_output, attention_mask):
token_embeddings = model_output[0] # First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
def save_pretrained(self, output_path):
self.model.save_pretrained(output_path)
class EmbeddingTrainer(Trainer):
def _save(self, output_dir: Optional[str] = None, state_dict=None):
# If we are executing this function, we are the process zero, so we don't check for that.
output_dir = output_dir if output_dir is not None else self.args.output_dir
os.makedirs(output_dir, exist_ok=True)
logger.info(f"Saving model checkpoint to {output_dir}")
self.model.save_pretrained(output_dir)
if self.tokenizer is not None:
self.tokenizer.save_pretrained(output_dir)
# Good practice: save your training arguments together with the trained model
torch.save(self.args, os.path.join(output_dir, TRAINING_ARGS_NAME))
def compute_loss(self, model, inputs, return_outputs=False):
all_embeddings = {}
for k in ['question', 'answer']:
all_embeddings[k] = model(
input_ids=inputs[k]['input_ids'],
attention_mask=inputs[k]['attention_mask'],
token_type_ids=inputs[k]['token_type_ids'],
)
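# NOTE: the DDP-wrapped model is invoked once per key here, i.e. twice per
# training step; see the single-forward workaround sketched under Expected behavior below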
embeddings_query = all_embeddings['question']
embeddings_pos = all_embeddings['answer']
scores = embeddings_query @ embeddings_pos.T
labels = torch.arange(0, embeddings_query.shape[0], dtype=torch.long, device=embeddings_query.device)
self.cross_entropy = torch.nn.CrossEntropyLoss(reduction='mean')
loss = self.cross_entropy(scores, labels)
return loss
def main():
parser = HfArgumentParser((ModelArguments, TrainingArguments))
if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
model_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
else:
model_args, training_args = parser.parse_args_into_dataclasses()
model_args: ModelArguments
training_args: TrainingArguments
if (
os.path.exists(training_args.output_dir)
and os.listdir(training_args.output_dir)
and training_args.do_train
and not training_args.overwrite_output_dir
):
raise ValueError(
f"Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome."
)
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO if training_args.local_rank in [-1, 0] else logging.WARN,
)
set_seed(training_args.seed)
tokenizer = AutoTokenizer.from_pretrained(
model_args.model_name_or_path,
cache_dir=model_args.cache_dir
)
model = AutoModelForSentenceEmbedding(
model_args.model_name_or_path,
pooling=model_args.pooling,
normalize=model_args.normalize,
)
from datasets import load_dataset
wq = load_dataset('wiki_qa', split='train')
train_dataset = wq.remove_columns('label')
data_collator = QPCollator(tokenizer=tokenizer)
torch.autograd.set_detect_anomaly(True)
trainer = EmbeddingTrainer(
model=model,
args=training_args,
train_dataset=train_dataset,
data_collator=data_collator,
tokenizer=tokenizer,
)
trainer.train()
if __name__ == "__main__":
main()
```
### Expected behavior
Currently there is no problem on a single GPU.
I want this code to run normally on multiple GPUs, but it seems something is broken...
It's hard to find where the problem is, because I'm not super familiar with how pytorch/trainer/bertmodel work in a distributed manner...
Could you help me? Thanks!
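For reference, here is the kind of workaround I would try (a hedged sketch, not a confirmed fix): call the DDP-wrapped model only once per step by concatenating the question and answer batches and splitting the embeddings afterwards. As far as I can tell, DDP broadcasts module buffers (such as BERT's `position_ids`) on every forward, so invoking the wrapped model twice per step can trip autograd's version counter, which would match the error above. This assumes both fields are padded to the same length, which `QPCollator` above does.
```python
def compute_loss(self, model, inputs, return_outputs=False):
    q, a = inputs['question'], inputs['answer']
    # Stack questions and answers along the batch dimension -> one forward pass
    batch = {
        key: torch.cat([q[key], a[key]], dim=0)
        for key in ('input_ids', 'attention_mask', 'token_type_ids')
    }
    embeddings = model(**batch)
    # First half are question embeddings, second half are answer embeddings
    embeddings_query, embeddings_pos = embeddings.chunk(2, dim=0)
    scores = embeddings_query @ embeddings_pos.T
    labels = torch.arange(scores.shape[0], dtype=torch.long, device=scores.device)
    return F.cross_entropy(scores, labels)
```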
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23087/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23086
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23086/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23086/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23086/events
|
https://github.com/huggingface/transformers/issues/23086
| 1,690,900,444 |
I_kwDOCUB6oc5kyRfc
| 23,086 |
VideoMAEForVideoClassification does not support `device_map='auto'` yet.
|
{
"login": "MichaelRipa",
"id": 51883134,
"node_id": "MDQ6VXNlcjUxODgzMTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/51883134?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MichaelRipa",
"html_url": "https://github.com/MichaelRipa",
"followers_url": "https://api.github.com/users/MichaelRipa/followers",
"following_url": "https://api.github.com/users/MichaelRipa/following{/other_user}",
"gists_url": "https://api.github.com/users/MichaelRipa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MichaelRipa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MichaelRipa/subscriptions",
"organizations_url": "https://api.github.com/users/MichaelRipa/orgs",
"repos_url": "https://api.github.com/users/MichaelRipa/repos",
"events_url": "https://api.github.com/users/MichaelRipa/events{/privacy}",
"received_events_url": "https://api.github.com/users/MichaelRipa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @alaradirik and @amyeroberts ",
"Hi I would like give an update about this question: the questions was solved by manually setting device_map in accelerate.\r\n/\r\nHi, thanks for the commitment. I tested with this change, but there is still a bug which made me very confusing: \r\n```python\r\nRuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemmStridedBatched( handle, opa, opb, m, n, k, &alpha, a, lda, stridea, b, ldb, strideb, &beta, c, ldc, stridec, num_batches)`\r\n``` \r\nMy codes can run on one GPU, and I met this error when I run it on two GPUs with this change. \r\nI already checked the compatibility of CUDA, pytorch etc...Also did small tests about training on other small dataset with two GPUs by simple pytorch codes. They all worked. I even set the batch_size=1, the error is still there...\r\nIf you have any idea about this error, I am really appreciated. \r\n",
"cc @rafaelpadilla if you can take over the PR would be nice 😉 ",
"I'm on it. I will take a look in this issue.",
"Hi @shiliu0111 ,\r\n\r\nThank you for opening this issue.\r\nI tried to reproduce your error, but everything seems to be working on my side. I tried loading the model by setting \r\n`device_map=accelerator.device`.\r\n\r\nCould you please provide a code snippet that would allow me to replicate your error? Also kindly share the contents of your `accelerate/default_config.yaml`, so I can use the same configuration as you have.\r\n",
"Hi @rafaelpadilla, \r\n\r\nThanks for looking into this problem. I have solved this by manually setting up device_map. The codes now work very well.\r\n \r\n> Hi @shiliu0111 ,\r\n> \r\n> Thank you for opening this issue. I tried to reproduce your error, but everything seems to be working on my side. I tried loading the model by setting `device_map=accelerator.device`.\r\n> \r\n> Could you please provide a code snippet that would allow me to replicate your error? Also kindly share the contents of your `accelerate/default_config.yaml`, so I can use the same configuration as you have.\r\n\r\n",
"Hi @shiliu0111 ,\r\n\r\nHappy to hear the problem was solved 😀\r\nI will close this issue for now. You can reopen it in case you encounter any related problems."
] | 1,682 | 1,695 | 1,695 |
CONTRIBUTOR
| null |
### Feature request
Support for `device_map = 'auto'` so that the VideoMAE models can be run with Int8 mixed precision. For reproducibility, here is what I get when I run the command in a Colab notebook (w/ GPU) with accelerate and bitsandbytes installed:
```
from transformers import AutoModelForVideoClassification
model_name = 'MCG-NJU/videomae-base-finetuned-ssv2'  # Example checkpoint
model = AutoModelForVideoClassification.from_pretrained(model_name,load_in_8bit=True,device_map='auto')
```
Which gives the following error message:
```
Overriding torch_dtype=None with `torch_dtype=torch.float16` due to requirements of `bitsandbytes` to enable model loading in mixed int8. Either pass torch_dtype=torch.float16 or don't pass this argument at all to remove this warning.
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <cell line: 4>:4 │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py:471 in │
│ from_pretrained │
│ │
│ 468 │ │ │ ) │
│ 469 │ │ elif type(config) in cls._model_mapping.keys(): │
│ 470 │ │ │ model_class = _get_model_class(config, cls._model_mapping) │
│ ❱ 471 │ │ │ return model_class.from_pretrained( │
│ 472 │ │ │ │ pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, │
│ 473 │ │ │ ) │
│ 474 │ │ raise ValueError( │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py:2703 in from_pretrained │
│ │
│ 2700 │ │ │ ) │
│ 2701 │ │ │ │
│ 2702 │ │ │ if model._no_split_modules is None: │
│ ❱ 2703 │ │ │ │ raise ValueError(f"{model.__class__.__name__} does not support `device_m │
│ 2704 │ │ │ no_split_modules = model._no_split_modules │
│ 2705 │ │ │ if device_map not in ["auto", "balanced", "balanced_low_0", "sequential"]: │
│ 2706 │ │ │ │ raise ValueError( │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: VideoMAEForVideoClassification does not support `device_map='auto'` yet.
```
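In the meantime, a possible workaround (a hedged sketch; the module names are my assumptions from reading `VideoMAEForVideoClassification`, so verify them against `model.named_children()`) is to pass an explicit `device_map` dict instead of `'auto'`, which sidesteps the `_no_split_modules` check:
```
from transformers import AutoModelForVideoClassification

model_name = 'MCG-NJU/videomae-base-finetuned-ssv2'  # Example checkpoint
device_map = {
    'videomae': 0,    # backbone on GPU 0
    'fc_norm': 0,
    'classifier': 0,  # could also be another GPU index or 'cpu'
}
model = AutoModelForVideoClassification.from_pretrained(
    model_name, load_in_8bit=True, device_map=device_map
)
```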
### Motivation
I saw a similar issue #22018 which got resolved really quickly. Hoping that this won't be a lot of work to incorporate into the VideoMAE models :slightly_smiling_face:
### Your contribution
I would prefer it if someone more familiar with the repo did this instead (it doesn't appear to be much work if the update is like #22207, but I didn't understand what that change did and don't currently have time to study the codebase).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23086/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23085
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23085/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23085/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23085/events
|
https://github.com/huggingface/transformers/pull/23085
| 1,690,800,787 |
PR_kwDOCUB6oc5PgEk5
| 23,085 |
Deprecate xpu_backend for ddp_backend
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2107554019,
"node_id": "MDU6TGFiZWwyMTA3NTU0MDE5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Distributed%20Training%20/%20Models",
"name": "Distributed Training / Models",
"color": "fef2c0",
"default": false,
"description": ""
},
{
"id": 2155169140,
"node_id": "MDU6TGFiZWwyMTU1MTY5MTQw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/trainer",
"name": "trainer",
"color": "2ef289",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR deprecates the `xpu_backend` training argument in favor of a new `ddp_backend` argument that can be passed to the `AcceleratorState` directly when desired/appropriate.
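A quick usage sketch (hedged; the set of accepted backend strings such as `"nccl"`, `"gloo"`, `"mpi"`, or `"ccl"` depends on the installed torch/accelerate):
```python
from transformers import TrainingArguments

# Instead of the deprecated xpu_backend="ccl":
args = TrainingArguments(output_dir="out", ddp_backend="ccl")
```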
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23085/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23085",
"html_url": "https://github.com/huggingface/transformers/pull/23085",
"diff_url": "https://github.com/huggingface/transformers/pull/23085.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23085.patch",
"merged_at": 1682948688000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23084
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23084/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23084/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23084/events
|
https://github.com/huggingface/transformers/issues/23084
| 1,690,666,445 |
I_kwDOCUB6oc5kxYXN
| 23,084 |
A potential bug here found in `BeamSearchScorer.process`
|
{
"login": "ZachVec",
"id": 75198239,
"node_id": "MDQ6VXNlcjc1MTk4MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75198239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZachVec",
"html_url": "https://github.com/ZachVec",
"followers_url": "https://api.github.com/users/ZachVec/followers",
"following_url": "https://api.github.com/users/ZachVec/following{/other_user}",
"gists_url": "https://api.github.com/users/ZachVec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZachVec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZachVec/subscriptions",
"organizations_url": "https://api.github.com/users/ZachVec/orgs",
"repos_url": "https://api.github.com/users/ZachVec/repos",
"events_url": "https://api.github.com/users/ZachVec/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZachVec/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey @ZachVec -- I believe you are correct, the implementation is incorrect for batch size > 1. I'll open a PR to fix it :)",
"Should be fixed now 🤗 "
] | 1,682 | 1,683 | 1,683 |
NONE
| null |
### System Info
System doesn't matter.
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am new to Transformers and am reading the source code of beam search in `src/transformers/generation/beam_search.py`, and I have a question about the code here: https://github.com/huggingface/transformers/blob/main/src/transformers/generation/beam_search.py#L290
I noticed that in PR #21993 the variable `cur_len` is increased by one before checking if a beam hypothesis is done, but **this variable is increased inside a loop**. In other words, it is incremented once for each sample in the batch. I wonder if there is any particular reason for that.
### Expected behavior
```python
def process(
self,
input_ids: torch.LongTensor,
next_scores: torch.FloatTensor,
next_tokens: torch.LongTensor,
next_indices: torch.LongTensor,
pad_token_id: Optional[int] = None,
eos_token_id: Optional[Union[int, List[int]]] = None,
beam_indices: Optional[torch.LongTensor] = None,
) -> Tuple[torch.Tensor]:
cur_len = input_ids.shape[-1] + 1  # the one should be added here, instead of inside the loop
# some code here.
for batch_idx, beam_hyp in enumerate(self._beam_hyps):
# some code here.
self._done[batch_idx] = self._done[batch_idx] or beam_hyp.is_done(
next_scores[batch_idx].max().item(), cur_len
)
# some code here.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23084/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23083
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23083/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23083/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23083/events
|
https://github.com/huggingface/transformers/pull/23083
| 1,690,629,784 |
PR_kwDOCUB6oc5PfgN-
| 23,083 |
Fix string syntax error in logger warning message (additional comma)
|
{
"login": "xwen99",
"id": 48824317,
"node_id": "MDQ6VXNlcjQ4ODI0MzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48824317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xwen99",
"html_url": "https://github.com/xwen99",
"followers_url": "https://api.github.com/users/xwen99/followers",
"following_url": "https://api.github.com/users/xwen99/following{/other_user}",
"gists_url": "https://api.github.com/users/xwen99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xwen99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xwen99/subscriptions",
"organizations_url": "https://api.github.com/users/xwen99/orgs",
"repos_url": "https://api.github.com/users/xwen99/repos",
"events_url": "https://api.github.com/users/xwen99/events{/privacy}",
"received_events_url": "https://api.github.com/users/xwen99/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This warning message was introduced in this PR: https://github.com/huggingface/transformers/pull/21707, but one additional comma exists in the message string: https://github.com/huggingface/transformers/blob/7f4f8b97d03a16f89737ebda4386411f47f4f104/src/transformers/models/blip_2/modeling_blip_2.py#L1607-L1613
This can cause the following error, since the second part of the string is parsed as an additional logging argument:
```
File "/newdata/xinwen/miniconda3/lib/python3.10/site-packages/transformers/models/blip_2/modeling_blip_2.py", line 1626, in _preprocess_accelerate
logger.warning(
Message: 'The `language_model` is not in the `hf_device_map` dictionary and you are running your script in a multi-GPU environment. this may lead to unexpected behavior when using `accelerate`. Please pass a `device_map` that contains `language_model` to remove this warning. Please refer to https://github.com/huggingface/blog/blob/main/accelerate-large-models.md for'
Arguments: (' more details on creating a `device_map` for large models.',)
```
This PR fixes the issue by simply removing the additional comma.
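For illustration, a minimal standalone repro of the failure mode (my own sketch, not code from the library): with %-style logging, extra positional arguments are treated as format arguments, and since the message contains no `%s` placeholder, formatting fails and logging falls back to printing the message and the arguments separately, exactly as in the output above.
```python
import logging

logging.basicConfig()
logger = logging.getLogger(__name__)

# Buggy: the stray comma turns the second string into a %-style argument
logger.warning(
    "Please refer to the guide for",
    " more details on creating a `device_map` for large models.",
)

# Fixed: implicit string concatenation yields a single message
logger.warning(
    "Please refer to the guide for"
    " more details on creating a `device_map` for large models."
)
```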
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://huggingface.co/google/flan-ul2/discussions/6#643a02e5623c970188059c17
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
cc @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23083/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23083",
"html_url": "https://github.com/huggingface/transformers/pull/23083",
"diff_url": "https://github.com/huggingface/transformers/pull/23083.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23083.patch",
"merged_at": 1682946857000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23082
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23082/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23082/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23082/events
|
https://github.com/huggingface/transformers/pull/23082
| 1,690,572,310 |
PR_kwDOCUB6oc5PfTvN
| 23,082 |
Add support for beam search's num_return_sequences flag in flax
|
{
"login": "mayankagarwals",
"id": 39498938,
"node_id": "MDQ6VXNlcjM5NDk4OTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/39498938?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mayankagarwals",
"html_url": "https://github.com/mayankagarwals",
"followers_url": "https://api.github.com/users/mayankagarwals/followers",
"following_url": "https://api.github.com/users/mayankagarwals/following{/other_user}",
"gists_url": "https://api.github.com/users/mayankagarwals/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mayankagarwals/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mayankagarwals/subscriptions",
"organizations_url": "https://api.github.com/users/mayankagarwals/orgs",
"repos_url": "https://api.github.com/users/mayankagarwals/repos",
"events_url": "https://api.github.com/users/mayankagarwals/events{/privacy}",
"received_events_url": "https://api.github.com/users/mayankagarwals/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"CC @gianlucadetommaso and @gante ",
"_The documentation is not available anymore as the PR was closed or merged._",
"@mayankagarwals to make our CI green, you will likely need to rebase with `main`, then run `make fixup`, then commit. \r\n\r\nAlso, tag the PR as ready when you're ready for a final check from a core maintainer :)",
"Hey @gante ,\r\n\r\nYes! Have done those. Wanted to get your views on it before cleaning up the PR. The CI is green now. \r\n\r\nI couldn't find any specific test for this so didn't add but the following script serves as a decent test for functional purposes\r\n```\r\nfrom transformers import FlaxGPT2LMHeadModel, GPT2Tokenizer\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\nmodel = FlaxGPT2LMHeadModel.from_pretrained(\"gpt2\", pad_token_id=tokenizer.eos_token_id)\r\n\r\ninput_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='jax')\r\n\r\nbeam_output = model.generate(\r\n input_ids,\r\n max_length=50,\r\n num_beams=5,\r\n no_repeat_ngram_size=2,\r\n num_return_sequences=2,\r\n early_stopping=True\r\n)\r\nprint(\"All generated hypotheses:\\n\")\r\nfor sequence in beam_output.sequences.tolist():\r\n print(tokenizer.decode(sequence, skip_special_tokens=True))\r\n print(\"-------\")\r\n \r\n```\r\n \r\n \r\n Output before the change: \r\n \r\n```\r\nAll generated hypotheses:\r\n\r\nI enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again.\r\n\r\nI'm not sure if I'll ever be able to walk with him again. I'm not sure if I'll\r\n-------\r\n```\r\n\r\nOutput after change: \r\n```\r\nAll generated hypotheses:\r\n\r\nI enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again.\r\n\r\nI'm not sure if I'll ever be able to walk with him again. I'm not sure if I'll\r\n-------\r\nI enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again.\r\n\r\nI'm not sure if I'll ever be able to walk with him again.\r\n\r\nI'm not sure if\r\n-------\r\n```",
"> But augment it to return N beams and verify we get the sequences/scores for these beams\r\n\r\nSure, have added the test. Thanks for the reference, made it a 5-minute work :p @sanchit-gandhi @gante ",
"ready for a final check, tagging a core maintainer"
] | 1,682 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
Fixes part of https://github.com/huggingface/transformers/issues/22696
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23082/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23082",
"html_url": "https://github.com/huggingface/transformers/pull/23082",
"diff_url": "https://github.com/huggingface/transformers/pull/23082.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23082.patch",
"merged_at": 1683125434000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23081
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23081/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23081/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23081/events
|
https://github.com/huggingface/transformers/issues/23081
| 1,690,552,268 |
I_kwDOCUB6oc5kw8fM
| 23,081 |
GPTNeoXAttention does not deal with odd numbers of attention heads
|
{
"login": "peter-sk",
"id": 6168908,
"node_id": "MDQ6VXNlcjYxNjg5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6168908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peter-sk",
"html_url": "https://github.com/peter-sk",
"followers_url": "https://api.github.com/users/peter-sk/followers",
"following_url": "https://api.github.com/users/peter-sk/following{/other_user}",
"gists_url": "https://api.github.com/users/peter-sk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peter-sk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peter-sk/subscriptions",
"organizations_url": "https://api.github.com/users/peter-sk/orgs",
"repos_url": "https://api.github.com/users/peter-sk/repos",
"events_url": "https://api.github.com/users/peter-sk/events{/privacy}",
"received_events_url": "https://api.github.com/users/peter-sk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! Thanks for reporting!Feel free to open a PR that raises and error when the hidden size is not divisible by the number of heads. This indeed should not happen. (Can probably be checked in the config)"
] | 1,682 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
### System Info
transformers 4.29, HEAD
Linux (not relevant, reproducible on e.g. Mac OS)
python 3.10.11 (not relevant, reproducible on e.g. python 3.9.13)
### Who can help?
@ArthurZucker
@younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When the hidden size is not evenly divisible by the number of attention heads, GPTNeoXAttention throws an exception while trying to reshape its state.
Here is a minimal example to reproduce:
```
from transformers import pipeline
p=pipeline(model="Isotonic/gpt_neox_225M",task="text-generation")
p("I like to eat ")
```
The exception is the following:
```
/home/jps/anaconda3/envs/transformers/lib/python3.10/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py:133 in forward
RuntimeError: shape '[1, 5, 12, 255]' is invalid for input of size 15360
```
### Expected behavior
What happens here is that the model has 12 attention heads and a hidden size of 1024. Thus, head_size is calculated as 1024 // 12 == 85. Then 3 * head_size is 255 rather than the 256 the reshape expects, producing the error above.
I am not quite sure how this is supposed to work. In https://github.com/EleutherAI/gpt-neox, the code checks that the hidden size is evenly divisible by the number of heads. This would not enable the use of this model, but it might give a better error message.
Is there any chance to run such a model?
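For illustration, the kind of guard I mean (a sketch mirroring the check in the EleutherAI/gpt-neox repo, not actual transformers code; the placement in `GPTNeoXConfig.__init__` is hypothetical):
```python
if hidden_size % num_attention_heads != 0:
    raise ValueError(
        f"hidden_size ({hidden_size}) must be divisible by "
        f"num_attention_heads ({num_attention_heads})"
    )
```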
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23081/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23080
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23080/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23080/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23080/events
|
https://github.com/huggingface/transformers/pull/23080
| 1,690,173,705 |
PR_kwDOCUB6oc5Pd8yv
| 23,080 |
Fix grammar error in summarization pipeline
|
{
"login": "SKaplanOfficial",
"id": 7865925,
"node_id": "MDQ6VXNlcjc4NjU5MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7865925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SKaplanOfficial",
"html_url": "https://github.com/SKaplanOfficial",
"followers_url": "https://api.github.com/users/SKaplanOfficial/followers",
"following_url": "https://api.github.com/users/SKaplanOfficial/following{/other_user}",
"gists_url": "https://api.github.com/users/SKaplanOfficial/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SKaplanOfficial/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SKaplanOfficial/subscriptions",
"organizations_url": "https://api.github.com/users/SKaplanOfficial/orgs",
"repos_url": "https://api.github.com/users/SKaplanOfficial/repos",
"events_url": "https://api.github.com/users/SKaplanOfficial/events{/privacy}",
"received_events_url": "https://api.github.com/users/SKaplanOfficial/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you very much ! "
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes a minor grammar error I noticed while using the summarization pipeline.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
- pipelines: @Narsil
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23080/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23080",
"html_url": "https://github.com/huggingface/transformers/pull/23080",
"diff_url": "https://github.com/huggingface/transformers/pull/23080.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23080.patch",
"merged_at": 1682945697000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23079
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23079/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23079/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23079/events
|
https://github.com/huggingface/transformers/issues/23079
| 1,690,066,113 |
I_kwDOCUB6oc5kvFzB
| 23,079 |
Trainer doesn't run `compute_metrics` when a `torch.compile` model is passed.
|
{
"login": "davidgilbertson",
"id": 4443482,
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidgilbertson",
"html_url": "https://github.com/davidgilbertson",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Another thing I just discovered, it also doesn't complete `save_pretrained()` correctly. Is saves _something_, but there isn't a `config.json` file in there. I'm guessing it's the line in `_save()` starting with `if not isinstance(self.model, PreTrainedModel)...`\r\n\r\nAgain, I know now I can do `torch_compile`, but I reckon this is going to sting lots of users as they try to pass in a compile model with the understanding that it \"just works\" without any code changes.",
"You shouldn't pass a `torch.compile`-d model to the Trainer, but let the Trainer do the compilation itself.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,686 | 1,686 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- Huggingface_hub version: 0.12.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run training with evaluation that has a `compute_metrics` function defined.
When passing the model to `Trainer`, pass a `torch.compile()` wrapped model.
In `Trainer.__init__()` there's the line `default_label_names = find_labels(self.model.__class__)` but the model class is `torch._dynamo.eval_frame.OptimizedModule` so no labels are assigned and this has the side effect of `compute_metrics` not being run.
It would be great if this checked for this case and got the correct model, or just threw a warning. I'm guessing lots of people are going to come across this as Torch 2.0 gains traction.
I only realised _after_ chasing down the cause of evaluation not running that I can pass `torch_compile=True`, so this bug no longer affects me.
### Expected behavior
Works the same as passing a non-wrapped model.
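In the meantime, letting the Trainer do the compilation avoids the problem, since the label names are read from the original model class before wrapping. A minimal sketch (reusing the `model` and `compute_metrics` from my own script):
```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(output_dir="out", torch_compile=True)
trainer = Trainer(model=model, args=args, compute_metrics=compute_metrics)
```
Alternatively, `Trainer.__init__` could presumably unwrap a compiled model via `model._orig_mod` (the attribute `torch.compile` uses to store the original module) before calling `find_labels`.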
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23079/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23078
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23078/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23078/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23078/events
|
https://github.com/huggingface/transformers/pull/23078
| 1,690,047,505 |
PR_kwDOCUB6oc5Pdizg
| 23,078 |
Fix `convnext` __init__
|
{
"login": "IMvision12",
"id": 88665786,
"node_id": "MDQ6VXNlcjg4NjY1Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/88665786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IMvision12",
"html_url": "https://github.com/IMvision12",
"followers_url": "https://api.github.com/users/IMvision12/followers",
"following_url": "https://api.github.com/users/IMvision12/following{/other_user}",
"gists_url": "https://api.github.com/users/IMvision12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IMvision12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IMvision12/subscriptions",
"organizations_url": "https://api.github.com/users/IMvision12/orgs",
"repos_url": "https://api.github.com/users/IMvision12/repos",
"events_url": "https://api.github.com/users/IMvision12/events{/privacy}",
"received_events_url": "https://api.github.com/users/IMvision12/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,683 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do
Fixes the `convnext` `__init__`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23078/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23078",
"html_url": "https://github.com/huggingface/transformers/pull/23078",
"diff_url": "https://github.com/huggingface/transformers/pull/23078.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23078.patch",
"merged_at": 1682948202000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23077
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23077/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23077/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23077/events
|
https://github.com/huggingface/transformers/issues/23077
| 1,690,019,033 |
I_kwDOCUB6oc5ku6TZ
| 23,077 |
[i18n-<languageCode>] Translating docs to <languageName>
|
{
"login": "johnihsususjd",
"id": 132223428,
"node_id": "U_kgDOB-GRxA",
"avatar_url": "https://avatars.githubusercontent.com/u/132223428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnihsususjd",
"html_url": "https://github.com/johnihsususjd",
"followers_url": "https://api.github.com/users/johnihsususjd/followers",
"following_url": "https://api.github.com/users/johnihsususjd/following{/other_user}",
"gists_url": "https://api.github.com/users/johnihsususjd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnihsususjd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnihsususjd/subscriptions",
"organizations_url": "https://api.github.com/users/johnihsususjd/orgs",
"repos_url": "https://api.github.com/users/johnihsususjd/repos",
"events_url": "https://api.github.com/users/johnihsususjd/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnihsususjd/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
closed
| false | null |
[] |
[] | 1,682 | 1,682 | 1,682 |
NONE
| null |
<!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through)
- [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx).
## Tutorial section
- [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx)
- [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx)
- [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx)
- [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx)
- [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx)
- [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx)
- [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx)
<!--
Keep on adding more as you go 🔥
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23077/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23076
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23076/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23076/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23076/events
|
https://github.com/huggingface/transformers/issues/23076
| 1,689,949,984 |
I_kwDOCUB6oc5kupcg
| 23,076 |
Unable to compare versions for numpy>=1.17: need=1.17 found=None.
|
{
"login": "Hanochhu",
"id": 52187558,
"node_id": "MDQ6VXNlcjUyMTg3NTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/52187558?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hanochhu",
"html_url": "https://github.com/Hanochhu",
"followers_url": "https://api.github.com/users/Hanochhu/followers",
"following_url": "https://api.github.com/users/Hanochhu/following{/other_user}",
"gists_url": "https://api.github.com/users/Hanochhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hanochhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hanochhu/subscriptions",
"organizations_url": "https://api.github.com/users/Hanochhu/orgs",
"repos_url": "https://api.github.com/users/Hanochhu/repos",
"events_url": "https://api.github.com/users/Hanochhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hanochhu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Updating transformers to the latest version should fix the problem. \r\nYou can run:\r\n`pip install --upgrade transformers`\r\nto update Transformers to the latest version.",
"> I have tried conda update transformers , but when it finished ,there is no error and the version didn't change. still is 4.18.0. \r\nThen, I also tried the following command,nothing changed\r\n```\r\n~$ conda install transformers==4.28.1\r\nCollecting package metadata (current_repodata.json): done\r\nSolving environment: done\r\n# All requested packages already installed.\r\n\r\n$ conda list transformers\r\n# packages in environment at /home/miniconda3:\r\n#\r\n# Name Version Build Channel\r\nsentence-transformers 2.2.2 pypi_0 pypi\r\ntransformers 4.18.0 pypi_0 pypi \r\n```",
"It seems that the conda channel has not been updated, hence it pulls in the old version.\r\nCan you try running:\r\n`conda install -c huggingface transformers`\r\n\r\n\r\nConda environments also support installs using pip, so you could also run:\r\n```bash\r\nconda install pip\r\npip install --upgrade transformers\r\n``` ",
"**I have update transformers to latest version** \r\n```\r\n$ conda list transformers \r\n# packages in environment at /home/miniconda3: \r\n# \r\n# Name Version Build Channel \r\nsentence-transformers 2.2.2 pypi_0 pypi \r\ntransformers 4.28.1 py_0 huggingface \r\n```\r\n**But the problem still** \r\n```\r\nTraceback (most recent call last):\r\n File \"/home/hyx/hhq/hugging_face/test.py\", line 1, in <module>\r\n from transformers import pipeline\r\n File \"/home/miniconda3/lib/python3.9/site-packages/transformers/__init__.py\", line 26, in <module>\r\n from . import dependency_versions_check\r\n File \"/home/miniconda3/lib/python3.9/site-packages/transformers/dependency_versions_check.py\", line 41, in <module>\r\n require_version_core(deps[pkg])\r\n File \"/home/miniconda3/lib/python3.9/site-packages/transformers/utils/versions.py\", line 123, in require_version_core\r\n return require_version(requirement, hint)\r\n File \"/home/miniconda3/lib/python3.9/site-packages/transformers/utils/versions.py\", line 117, in require_version\r\n _compare_versions(op, got_ver, want_ver, requirement, pkg, hint)\r\n File \"/home/miniconda3/lib/python3.9/site-packages/transformers/utils/versions.py\", line 45, in _compare_versions\r\n raise ValueError(\r\nValueError: Unable to compare versions for numpy>=1.17: need=1.17 found=None. This is unusual. Consider reinstalling numpy.\r\n```",
"What version of Numpy are you using? Can you update that as well?\r\nYou can use:\r\n`conda update numpy`",
"```\r\n$ conda list numpy\r\n# packages in environment at /home/miniconda3:\r\n#\r\n# Name Version Build Channel\r\nnumpy 1.24.3 py39h14f4228_0 defaults\r\nnumpy-base 1.24.3 py39h31eccc5_0 defaults\r\nnumpy-quaternion 2022.4.1 pypi_0 pypi\r\n```\r\nas you can see, numpy is also latest ",
"Same thing ...\r\n```\r\n % ipython\r\nPython 3.9.16 (main, Mar 8 2023, 04:29:44) \r\nType 'copyright', 'credits' or 'license' for more information\r\nIPython 8.12.0 -- An enhanced Interactive Python. Type '?' for help.\r\n\r\nIn [1]: import transformers\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\nCell In[1], line 1\r\n----> 1 import transformers\r\n\r\nFile ~/anaconda3/envs/pysr/lib/python3.9/site-packages/transformers/__init__.py:26\r\n 23 from typing import TYPE_CHECKING\r\n 25 # Check the dependencies satisfy the minimal versions required.\r\n---> 26 from . import dependency_versions_check\r\n 27 from .utils import (\r\n 28 OptionalDependencyNotAvailable,\r\n 29 _LazyModule,\r\n (...)\r\n 42 logging,\r\n 43 )\r\n 46 logger = logging.get_logger(__name__) # pylint: disable=invalid-name\r\n\r\nFile ~/anaconda3/envs/pysr/lib/python3.9/site-packages/transformers/dependency_versions_check.py:41\r\n 38 if not is_tokenizers_available():\r\n 39 continue # not required, check version only if installed\r\n---> 41 require_version_core(deps[pkg])\r\n 42 else:\r\n 43 raise ValueError(f\"can't find {pkg} in {deps.keys()}, check dependency_versions_table.py\")\r\n\r\nFile ~/anaconda3/envs/pysr/lib/python3.9/site-packages/transformers/utils/versions.py:123, in require_version_core(requirement)\r\n 121 \"\"\"require_version wrapper which emits a core-specific hint on failure\"\"\"\r\n 122 hint = \"Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git main\"\r\n--> 123 return require_version(requirement, hint)\r\n\r\nFile ~/anaconda3/envs/pysr/lib/python3.9/site-packages/transformers/utils/versions.py:117, in require_version(requirement, hint)\r\n 115 if want_ver is not None:\r\n 116 for op, want_ver in wanted.items():\r\n--> 117 _compare_versions(op, got_ver, want_ver, requirement, pkg, hint)\r\n\r\nFile ~/anaconda3/envs/pysr/lib/python3.9/site-packages/transformers/utils/versions.py:45, in _compare_versions(op, got_ver, want_ver, requirement, pkg, hint)\r\n 43 def _compare_versions(op, got_ver, want_ver, requirement, pkg, hint):\r\n 44 if got_ver is None or want_ver is None:\r\n---> 45 raise ValueError(\r\n 46 f\"Unable to compare versions for {requirement}: need={want_ver} found={got_ver}. This is unusual. Consider\"\r\n 47 f\" reinstalling {pkg}.\"\r\n 48 )\r\n 49 if not ops[op](version.parse(got_ver), version.parse(want_ver)):\r\n 50 raise ImportError(\r\n 51 f\"{requirement} is required for a normal functioning of this module, but found {pkg}=={got_ver}.{hint}\"\r\n 52 )\r\n\r\nValueError: Unable to compare versions for numpy>=1.17: need=1.17 found=None. This is unusual. Consider reinstalling numpy.\r\n\r\nIn [2]: quit\r\n(pysr) davidlaxer@bluediamond julia % conda list numpy \r\n# packages in environment at /Users/davidlaxer/anaconda3/envs/pysr:\r\n#\r\n# Name Version Build Channel\r\nnumpy 1.24.3 py39he696674_0 \r\nnumpy-base 1.24.3 py39h9cd3388_0 \r\n```",
"If there is no way to solve this problem, I would change a env. I will close this issue in few days",
"```\r\n % ipython \r\nPython 3.9.16 (main, Mar 8 2023, 04:29:44) \r\nType 'copyright', 'credits' or 'license' for more information\r\nIPython 8.12.0 -- An enhanced Interactive Python. Type '?' for help.\r\n\r\nIn [1]: import importlib_metadata\r\n\r\nIn [2]: import numpy\r\n\r\nIn [3]: importlib_metadata.version(numpy)\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\nCell In[3], line 1\r\n----> 1 importlib_metadata.version(numpy)\r\n\r\nFile ~/anaconda3/envs/pysr/lib/python3.9/site-packages/importlib_metadata/__init__.py:832, in version(distribution_name)\r\n 825 def version(distribution_name):\r\n 826 \"\"\"Get the version string for the named package.\r\n 827 \r\n 828 :param distribution_name: The name of the distribution package to query.\r\n 829 :return: The version string for the package as defined in the package's\r\n 830 \"Version\" metadata key.\r\n 831 \"\"\"\r\n--> 832 return distribution(distribution_name).version\r\n\r\nFile ~/anaconda3/envs/pysr/lib/python3.9/site-packages/importlib_metadata/__init__.py:805, in distribution(distribution_name)\r\n 799 def distribution(distribution_name):\r\n 800 \"\"\"Get the ``Distribution`` instance for the named package.\r\n 801 \r\n 802 :param distribution_name: The name of the distribution package as a string.\r\n 803 :return: A ``Distribution`` instance (or subclass thereof).\r\n 804 \"\"\"\r\n--> 805 return Distribution.from_name(distribution_name)\r\n\r\nFile ~/anaconda3/envs/pysr/lib/python3.9/site-packages/importlib_metadata/__init__.py:381, in Distribution.from_name(cls, name)\r\n 379 raise ValueError(\"A distribution name is required.\")\r\n 380 try:\r\n--> 381 return next(cls.discover(name=name))\r\n 382 except StopIteration:\r\n 383 raise PackageNotFoundError(name)\r\n\r\nFile ~/anaconda3/envs/pysr/lib/python3.9/site-packages/importlib_metadata/__init__.py:400, in <genexpr>(.0)\r\n 397 raise ValueError(\"cannot accept context and kwargs\")\r\n 398 context = context or DistributionFinder.Context(**kwargs)\r\n 399 return itertools.chain.from_iterable(\r\n--> 400 resolver(context) for resolver in cls._discover_resolvers()\r\n 401 )\r\n\r\nFile ~/anaconda3/envs/pysr/lib/python3.9/site-packages/importlib_metadata/__init__.py:731, in MetadataPathFinder.find_distributions(self, context)\r\n 722 def find_distributions(self, context=DistributionFinder.Context()):\r\n 723 \"\"\"\r\n 724 Find distributions.\r\n 725 \r\n (...)\r\n 729 of directories ``context.path``.\r\n 730 \"\"\"\r\n--> 731 found = self._search_paths(context.name, context.path)\r\n 732 return map(PathDistribution, found)\r\n\r\nFile ~/anaconda3/envs/pysr/lib/python3.9/site-packages/importlib_metadata/__init__.py:737, in MetadataPathFinder._search_paths(cls, name, paths)\r\n 734 @classmethod\r\n 735 def _search_paths(cls, name, paths):\r\n 736 \"\"\"Find metadata directories in paths heuristically.\"\"\"\r\n--> 737 prepared = Prepared(name)\r\n 738 return itertools.chain.from_iterable(\r\n 739 path.search(prepared) for path in map(FastPath, paths)\r\n 740 )\r\n\r\nFile ~/anaconda3/envs/pysr/lib/python3.9/site-packages/importlib_metadata/__init__.py:692, in Prepared.__init__(self, name)\r\n 690 if name is None:\r\n 691 return\r\n--> 692 self.normalized = self.normalize(name)\r\n 693 self.legacy_normalized = self.legacy_normalize(name)\r\n\r\nFile ~/anaconda3/envs/pysr/lib/python3.9/site-packages/importlib_metadata/__init__.py:700, in Prepared.normalize(name)\r\n 695 @staticmethod\r\n 696 def 
normalize(name):\r\n 697 \"\"\"\r\n 698 PEP 503 normalization plus dashes as underscores.\r\n 699 \"\"\"\r\n--> 700 return re.sub(r\"[-_.]+\", \"-\", name).lower().replace('-', '_')\r\n\r\nFile ~/anaconda3/envs/pysr/lib/python3.9/re.py:210, in sub(pattern, repl, string, count, flags)\r\n 203 def sub(pattern, repl, string, count=0, flags=0):\r\n 204 \"\"\"Return the string obtained by replacing the leftmost\r\n 205 non-overlapping occurrences of the pattern in string by the\r\n 206 replacement repl. repl can be either a string or a callable;\r\n 207 if a string, backslash escapes in it are processed. If it is\r\n 208 a callable, it's passed the Match object and must return\r\n 209 a replacement string to be used.\"\"\"\r\n--> 210 return _compile(pattern, flags).sub(repl, string, count)\r\n\r\nTypeError: expected string or bytes-like object\r\n```",
"I created a new Conda virtual environment and installed the requisite packages ... no issue.\r\nSo, something was wrong in original Conda virtual environment (which I removed).",
"I got around it by modifying `transformers/utils/versions.py` :\r\n\r\nline 102, from:\r\n`got_ver = importlib.metadata.version(pkg)`\r\n\r\nto:\r\n```\r\ngot_ver = importlib.metadata.version(pkg)\r\n if got_ver is None:\r\n import pkg_resources\r\n got_ver = pkg_resources.get_distribution(pkg).version\r\n```\r\n\r\nFor some reason, `importlib.metadata.version(\"numpy\")` returned None, but pkg_resources works",
"> I got around it by modifying `transformers/utils/versions.py` :\r\n> \r\n> line 102, from: `got_ver = importlib.metadata.version(pkg)`\r\n> \r\n> to:\r\n> \r\n> ```\r\n> got_ver = importlib.metadata.version(pkg)\r\n> if got_ver is None:\r\n> import pkg_resources\r\n> got_ver = pkg_resources.get_distribution(pkg).version\r\n> ```\r\n> \r\n> For some reason, `importlib.metadata.version(\"numpy\")` returned None, but pkg_resources works\r\n\r\n:+1: **Awesome**",
"This stackoverflow issue fixed my problem: https://stackoverflow.com/questions/74817641/openai-whisper-cannot-import-numpy",
"This issue should be reopened. `importlib.metadata.version(\"numpy\")` returns None despite numpy is correctly installed and working. The way transformers detecting **runtime** package versions seems not correct. It's checking *installed* metadata, not actual runtime version.",
"> So, something was wrong in original Conda virtual environment (which I removed).\r\n\r\nBTW I got this workaround working, clean up and remove all the (possibly outdated) metadata from site-packages:\r\n\r\n```\r\nsite_packages=$(python -c \"from distutils.sysconfig import get_python_lib; print(get_python_lib())\")\r\nrm -rf \"$site_packages\"/numpy\r\npip install --upgrade --force-reinstall numpy\r\n```\r\n\r\nBTW just reinstalling via `pip -I --force-reinstall install numpy` did not work."
] | 1,682 | 1,699 | 1,683 |
NONE
| null |
### System Info
Ubuntu 18.04.6
transformers version : 4.18.0
pytorch version : 2.0.0
numpy version : 1.24.3
conda env
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import pipeline

text_classifier = pipeline('text-classification', model='distilbert-base-uncased-finetuned-sst-2-english')
text = "This movie is good!"
result = text_classifier(text)
print(result)
```
**When I run code using transformers, I get the following error:**
```
Traceback (most recent call last):
  File "/home/hyx/hhq/hugging_face/test.py", line 1, in <module>
    from transformers import pipeline
  File "/home/miniconda3/lib/python3.9/site-packages/transformers/__init__.py", line 26, in <module>
    from . import dependency_versions_check
  File "/home/miniconda3/lib/python3.9/site-packages/transformers/dependency_versions_check.py", line 41, in <module>
    require_version_core(deps[pkg])
  File "/home/miniconda3/lib/python3.9/site-packages/transformers/utils/versions.py", line 123, in require_version_core
    return require_version(requirement, hint)
  File "/home/miniconda3/lib/python3.9/site-packages/transformers/utils/versions.py", line 117, in require_version
    _compare_versions(op, got_ver, want_ver, requirement, pkg, hint)
  File "/home/miniconda3/lib/python3.9/site-packages/transformers/utils/versions.py", line 45, in _compare_versions
    raise ValueError(
ValueError: Unable to compare versions for numpy>=1.17: need=1.17 found=None. This is unusual. Consider reinstalling numpy.
```
### Expected behavior
I have tried to reinstall numpy and transformers, but it did not work.
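For anyone debugging this, here is a minimal diagnostic sketch of the lookup that fails (per the discussion, it is the same `importlib.metadata` call that transformers' dependency check performs):

```python
# Reproduce the lookup that transformers' version check relies on.
# A PackageNotFoundError (or a None result, as seen above) points to
# stale or broken dist-info metadata in site-packages.
import importlib.metadata

try:
    print(importlib.metadata.version("numpy"))
except importlib.metadata.PackageNotFoundError:
    print("numpy metadata missing; try: pip install --force-reinstall numpy")
```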
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23076/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23076/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23075
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23075/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23075/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23075/events
|
https://github.com/huggingface/transformers/pull/23075
| 1,689,924,961 |
PR_kwDOCUB6oc5PdKeT
| 23,075 |
Fix check for backword_pos
|
{
"login": "winglian",
"id": 381258,
"node_id": "MDQ6VXNlcjM4MTI1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/winglian",
"html_url": "https://github.com/winglian",
"followers_url": "https://api.github.com/users/winglian/followers",
"following_url": "https://api.github.com/users/winglian/following{/other_user}",
"gists_url": "https://api.github.com/users/winglian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/winglian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/winglian/subscriptions",
"organizations_url": "https://api.github.com/users/winglian/orgs",
"repos_url": "https://api.github.com/users/winglian/repos",
"events_url": "https://api.github.com/users/winglian/events{/privacy}",
"received_events_url": "https://api.github.com/users/winglian/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @pacman100 "
] | 1,682 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
This fixes what I believe was the original intention of this line. @raghavanone, can you confirm?
Original PR here: https://github.com/huggingface/transformers/pull/21237
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23075/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23075",
"html_url": "https://github.com/huggingface/transformers/pull/23075",
"diff_url": "https://github.com/huggingface/transformers/pull/23075.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23075.patch",
"merged_at": 1683034362000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23074
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23074/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23074/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23074/events
|
https://github.com/huggingface/transformers/issues/23074
| 1,689,884,690 |
I_kwDOCUB6oc5kuZgS
| 23,074 |
How to use BartEncoder and BartDecoder
|
{
"login": "ryliu68",
"id": 23697666,
"node_id": "MDQ6VXNlcjIzNjk3NjY2",
"avatar_url": "https://avatars.githubusercontent.com/u/23697666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryliu68",
"html_url": "https://github.com/ryliu68",
"followers_url": "https://api.github.com/users/ryliu68/followers",
"following_url": "https://api.github.com/users/ryliu68/following{/other_user}",
"gists_url": "https://api.github.com/users/ryliu68/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ryliu68/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryliu68/subscriptions",
"organizations_url": "https://api.github.com/users/ryliu68/orgs",
"repos_url": "https://api.github.com/users/ryliu68/repos",
"events_url": "https://api.github.com/users/ryliu68/events{/privacy}",
"received_events_url": "https://api.github.com/users/ryliu68/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@LRY0111 Even though BART is a Sequence to Sequence Transformer trained with a Denoising Autoencoder objective, it has been trained to reconstruct the original text. So I don't think you can use BART to represent \"A\" as \"z\" using the base model.\r\nYou can find more information in the model docs [here](https://huggingface.co/docs/transformers/model_doc/bart).\r\n\r\nYou can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=bart) to look for fine-tuned versions on a task that interests you.",
"@awinml I appreciate your help. I mean that I want to get the latent representation z of a sentence A (by Encoder) and make some changes on z to formulate z' ; finally, reverse this process by Decoder to reconstruct the A'.\r\n\r\nSo I need to know how to do this A-->z-->A with Bart Encoder and Decoder; I want some example code here. Thank you very much.",
"cc @gante ",
"Hey @LRY0111 👋 \r\n\r\nYou were close to the correct usage, but there is a detail you missed in your solution :) The decoder must be used in an auto-regressive fashion, which we conveniently implemented in our `.generate()` method (see [this blog post](https://huggingface.co/blog/how-to-generate)). See the snippet below for an example.\r\n\r\n```py\r\nfrom transformers import BartForConditionalGeneration, BartTokenizer\r\n\r\nmodel = BartForConditionalGeneration.from_pretrained('facebook/bart-base')\r\ntokenizer = BartTokenizer.from_pretrained('facebook/bart-base')\r\n\r\ninputs = tokenizer([\"This is a test. Hello world\"], return_tensors=\"pt\")\r\n\r\nencoder = model.model.encoder\r\n\r\n# z.last_hidden_state has the encoded output. If you manipulate it, you may need to \r\n# rebuild the `BaseModelOutput` data class, which `.generate()` expects\r\nz = encoder(input_ids=inputs[\"input_ids\"])\r\n\r\nA = model.generate(encoder_outputs=z, max_new_tokens=20)\r\nprint(tokenizer.decode(A[0], skip_special_tokens=True))\r\n```",
"Well noted with thanks. I’ll try the solution you provided. Thank you again.\r\n\r\nBest regards,\r\n\r\n从 Windows 版邮件<https://go.microsoft.com/fwlink/?LinkId=550986>发送\r\n\r\n发件人: Joao ***@***.***>\r\n发送时间: 2023年5月3日 21:42\r\n收件人: ***@***.***>\r\n抄送: ***@***.***>; ***@***.***>\r\n主题: Re: [huggingface/transformers] How to use BartEncoder and BartDecoder (Issue #23074)\r\n\r\n\r\nHey @LRY0111<https://github.com/LRY0111> 👋\r\n\r\nYou were close to the correct usage, but there is a detail you missed in your solution :) The decoder must be used in an auto-regressive fashion, which we conveniently implemented in our .generate() method (see this blog post<https://huggingface.co/blog/how-to-generate>). See the snippet below for an example.\r\n\r\nfrom transformers import BartForConditionalGeneration, BartTokenizer\r\n\r\n\r\n\r\nmodel = BartForConditionalGeneration.from_pretrained('facebook/bart-base')\r\n\r\ntokenizer = BartTokenizer.from_pretrained('facebook/bart-base')\r\n\r\n\r\n\r\ninputs = tokenizer([\"This is a test. Hello world\"], return_tensors=\"pt\")\r\n\r\n\r\n\r\nencoder = model.model.encoder\r\n\r\ndecoder = model.model.decoder\r\n\r\n\r\n\r\n# z.last_hidden_state has the encoded output. If you manipulate it, you may need to\r\n\r\n# rebuild the `BaseModelOutput` data class, which `.generate()` expects\r\n\r\nz = encoder(input_ids=inputs[\"input_ids\"])\r\n\r\n\r\n\r\nA = model.generate(encoder_outputs=z, max_new_tokens=20)\r\n\r\nprint(tokenizer.decode(A[0], skip_special_tokens=True))\r\n\r\n—\r\nReply to this email directly, view it on GitHub<https://github.com/huggingface/transformers/issues/23074#issuecomment-1533053566>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AFUZSAVRRMEA7V27JD4OVQTXEJODRANCNFSM6AAAAAAXQ3WM6M>.\r\nYou are receiving this because you were mentioned.Message ID: ***@***.***>\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,686 | 1,686 |
NONE
| null |
### System Info
I'm a computer vision researcher, and I want to use BART as an auto-encoder in a CV task.
Here I have a question: how do I use BartEncoder to encode **A** to **z**, and then decode **z** back to **A**?
I need example code; please help me, thank you very much.
Maybe like one of these two?
one:
```
model = BartForConditionalGeneration.from_pretrained('facebook/bart-base')
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
inputs = A
encoder = model.model.encoder
decoder = model.model.decoder
z= encoder(input_ids = inputs["input_ids"])
A= decoder(z)
```
It produces the following error: `ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds.`
two:
```
from transformers.models.bart.modeling_bart import BartEncoder
from transformers.models.bart.modeling_bart import BartDecoder
inputs = A
z= BartEncoder(input_ids = inputs["input_ids"])
A= BartDecoder(z)
```
So, what should I do? I didn't find documentation for this; please help me, thank you again.
@ArthurZucker @gante @Narsil
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
See the code provided above.
### Expected behavior
Provide example code or documentation for this use case.
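For reference, a minimal sketch of the full **A → z → A'** round trip, based on the `generate()`-based answer in the comments; the `BaseModelOutput` rebuild is an addition to that answer, and the latent edit is purely illustrative:

```python
from transformers import BartForConditionalGeneration, BartTokenizer
from transformers.modeling_outputs import BaseModelOutput

model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")

inputs = tokenizer(["This is a test. Hello world"], return_tensors="pt")

# A -> z: run only the encoder.
z = model.model.encoder(input_ids=inputs["input_ids"])

# z -> z': edit the latent, then rebuild the BaseModelOutput that
# generate() expects (the identity scaling here is a placeholder edit).
z_prime = BaseModelOutput(last_hidden_state=z.last_hidden_state * 1.0)

# z' -> A': decode auto-regressively with generate().
A_prime = model.generate(encoder_outputs=z_prime, max_new_tokens=20)
print(tokenizer.decode(A_prime[0], skip_special_tokens=True))
```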
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23074/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23073
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23073/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23073/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23073/events
|
https://github.com/huggingface/transformers/pull/23073
| 1,689,883,472 |
PR_kwDOCUB6oc5PdCei
| 23,073 |
Added type hints for `Graphormer` pytorch version
|
{
"login": "dewasahu2003",
"id": 95997298,
"node_id": "U_kgDOBbjNcg",
"avatar_url": "https://avatars.githubusercontent.com/u/95997298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dewasahu2003",
"html_url": "https://github.com/dewasahu2003",
"followers_url": "https://api.github.com/users/dewasahu2003/followers",
"following_url": "https://api.github.com/users/dewasahu2003/following{/other_user}",
"gists_url": "https://api.github.com/users/dewasahu2003/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dewasahu2003/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dewasahu2003/subscriptions",
"organizations_url": "https://api.github.com/users/dewasahu2003/orgs",
"repos_url": "https://api.github.com/users/dewasahu2003/repos",
"events_url": "https://api.github.com/users/dewasahu2003/events{/privacy}",
"received_events_url": "https://api.github.com/users/dewasahu2003/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This looks pretty good! Is there a reason to use `Union[torch.Tensor, torch.LongTensor]` instead of just `torch.LongTensor`?",
"@Rocketknight1 Hi 👋\r\n- `Union[torch.Tensor, torch.LongTensor]` is used because the file had a lot of `nn.embedding` instances which expects either IntTensor or LongTensor\r\n- so to avoid any confusion 😕 i used that\r\n- [nn.embedding docs](https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html#torch.nn.Embedding)\r\n\r\n\r\n\r\n*if still changes are required i would be happy to make it*🙂\r\n",
"Hi @dewasahu2003, I think in most cases we just annotate those types as `LongTensor`! Your version is probably more correct, but for simplicity just `LongTensor` is fine, since that's what people usually use.",
"@Rocketknight1 Hi 👋 \r\n- if LongTensor is preferred then i would make changes along\r\n- that would help the code to be 🤒 bloat free\r\n",
"Yep, I think replacing with LongTensor is slightly better, and does make the code a bit cleaner too.",
"Sure ",
"Done. Thanks for the PR, we really appreciate it!"
] | 1,682 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
@Rocketknight1 👋
- I added type hints for the `graphormer` PyTorch model
- checked formatting with black and ruff
If some CI/CD checks do not pass, please comment and I will correct them.
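For reviewers skimming later, a small sketch of the annotation style settled on in the discussion (a hypothetical signature, not Graphormer's actual `forward`):

```python
from typing import Optional

import torch

# nn.Embedding consumes IntTensor/LongTensor inputs, so plain
# torch.LongTensor annotations were preferred over
# Union[torch.Tensor, torch.LongTensor] for embedding indices.
def forward(
    input_nodes: torch.LongTensor,
    attn_bias: Optional[torch.Tensor] = None,
) -> torch.Tensor:
    ...
```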
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23073/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23073",
"html_url": "https://github.com/huggingface/transformers/pull/23073",
"diff_url": "https://github.com/huggingface/transformers/pull/23073.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23073.patch",
"merged_at": 1684171662000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23072
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23072/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23072/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23072/events
|
https://github.com/huggingface/transformers/issues/23072
| 1,689,666,073 |
I_kwDOCUB6oc5ktkIZ
| 23,072 |
Register a custom tokenizer with AutoTokenizer
|
{
"login": "rahular",
"id": 1104544,
"node_id": "MDQ6VXNlcjExMDQ1NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1104544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rahular",
"html_url": "https://github.com/rahular",
"followers_url": "https://api.github.com/users/rahular/followers",
"following_url": "https://api.github.com/users/rahular/following{/other_user}",
"gists_url": "https://api.github.com/users/rahular/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rahular/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rahular/subscriptions",
"organizations_url": "https://api.github.com/users/rahular/orgs",
"repos_url": "https://api.github.com/users/rahular/repos",
"events_url": "https://api.github.com/users/rahular/events{/privacy}",
"received_events_url": "https://api.github.com/users/rahular/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"You can have a look at the [documentation here](https://huggingface.co/docs/transformers/custom_models) but this is already supported. Just do `CustomTokenizer.register_for_auto_class()` like for the models.",
"Duh! I was doing this for the models but didn't make the connection to the tokenizer. Thanks @sgugger!\r\n\r\nFor someone looking for the complete answer in the future:\r\n```\r\nCustomTokenizer.register_for_auto_class(\"AutoTokenizer\")\r\n```",
"@rahular Hi . I am doing the same thing but I am unable to do this can you help me by giving the code of how to integrate the tokenizer with huggingface?",
"Same \r\nhttps://github.com/huggingface/transformers/issues/23072#issuecomment-1819028435",
"could you explain what you are unable to do with a reproducer and a repo on the hub? 🤗 ",
"I've figure it out. I'm trying to use InternLM2 from AutoClass.\r\nFor \r\ntransformers 4.34.1\r\ntokenizers 0.14.1\r\n`CustomTokenizer.register_for_auto_class(\"AutoTokenizer\") ` doesn't work. \r\nHowever, the following works:\r\n`\r\nAutoConfig.register(\"internlm2\", InternLM2Config)\r\nAutoTokenizer.register(InternLM2Config, InternLM2Tokenizer, InternLM2TokenizerFast)\r\nAutoModel.register(InternLM2Config, InternLM2Model)\r\nAutoModelForCausalLM.register(InternLM2Config, InternLM2ForCausalLM)\r\n`\r\n\r\nThen I'm able to finish\r\n`\r\ntokenizer = AutoTokenizer.from_pretrained(\"path\")\r\n`",
"Great that you were able to debug. Do you want to open a PR to add more details about registering the tokenzier to help others in the community? 🤗 "
] | 1,682 | 1,707 | 1,682 |
CONTRIBUTOR
| null |
### System Info
(Possible duplicate: #10256)
I have written a custom tokenizer that builds on top of `BertTokenizer` (it returns one extra list of ids that will later be embedded in a custom model), and I have pushed it to the Hub as well. Now, how can I allow others to use it? The code for the tokenizer is uploaded to the Hub along with the code for the model (they are in the same file), but since I cannot register the tokenizer with `AutoTokenizer` as I can for models (`CustomModel.register_for_auto_class("AutoModel")`), others cannot load this tokenizer, and hence cannot use the model.
Is there a workaround for this?
Version: 4.27.4
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The code for both the tokenizer and model can be found here: https://huggingface.co/mcgill-babylm/bert_ds10M_np512_nh2_nl2_hs128_postags_ungrouped/blob/main/pos_bert.py
I am able to load the model with no problems since I push it after registering it as follows
```
BertForMaskedLMWithPOSEmb.register_for_auto_class("AutoModel")
BertForMaskedLMWithPOSEmb.register_for_auto_class("AutoModelForMaskedLM")
```
### Expected behavior
I should be able to register custom tokenizers with `AutoTokenizer` (which might be a new feature request) or work around it somehow to allow other users to use a custom tokenizer.
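A minimal sketch of the two registration routes that surface in this thread; `CustomConfig` and `CustomTokenizer` are placeholder classes standing in for the ones in the Hub repo:

```python
from transformers import AutoConfig, AutoTokenizer, BertTokenizer, PretrainedConfig

class CustomConfig(PretrainedConfig):
    # Placeholder config; the real model_type lives in the Hub repo.
    model_type = "custom-bert"

class CustomTokenizer(BertTokenizer):
    # Placeholder for a tokenizer that returns one extra list of ids.
    pass

# Route 1: mark the class for auto-class resolution before push_to_hub(),
# mirroring register_for_auto_class("AutoModel") on the model side.
CustomTokenizer.register_for_auto_class("AutoTokenizer")

# Route 2: explicit registration in the consuming process.
AutoConfig.register("custom-bert", CustomConfig)
AutoTokenizer.register(CustomConfig, slow_tokenizer_class=CustomTokenizer)
```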
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23072/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23071
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23071/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23071/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23071/events
|
https://github.com/huggingface/transformers/pull/23071
| 1,689,651,874 |
PR_kwDOCUB6oc5PcXIA
| 23,071 |
added type hints for blip_text pytorch model
|
{
"login": "iamarunbrahma",
"id": 6504730,
"node_id": "MDQ6VXNlcjY1MDQ3MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6504730?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iamarunbrahma",
"html_url": "https://github.com/iamarunbrahma",
"followers_url": "https://api.github.com/users/iamarunbrahma/followers",
"following_url": "https://api.github.com/users/iamarunbrahma/following{/other_user}",
"gists_url": "https://api.github.com/users/iamarunbrahma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iamarunbrahma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iamarunbrahma/subscriptions",
"organizations_url": "https://api.github.com/users/iamarunbrahma/orgs",
"repos_url": "https://api.github.com/users/iamarunbrahma/repos",
"events_url": "https://api.github.com/users/iamarunbrahma/events{/privacy}",
"received_events_url": "https://api.github.com/users/iamarunbrahma/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
Added type hints for the blip_text PyTorch model, as requested in https://github.com/huggingface/transformers/issues/16059
@Rocketknight1 Could you review this?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23071/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23071",
"html_url": "https://github.com/huggingface/transformers/pull/23071",
"diff_url": "https://github.com/huggingface/transformers/pull/23071.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23071.patch",
"merged_at": 1683030151000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23070
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23070/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23070/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23070/events
|
https://github.com/huggingface/transformers/issues/23070
| 1,689,631,597 |
I_kwDOCUB6oc5ktbtt
| 23,070 |
KeyError: 'eval_loss' (LLaMA finetuning)
|
{
"login": "coen22",
"id": 6968825,
"node_id": "MDQ6VXNlcjY5Njg4MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6968825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coen22",
"html_url": "https://github.com/coen22",
"followers_url": "https://api.github.com/users/coen22/followers",
"following_url": "https://api.github.com/users/coen22/following{/other_user}",
"gists_url": "https://api.github.com/users/coen22/gists{/gist_id}",
"starred_url": "https://api.github.com/users/coen22/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coen22/subscriptions",
"organizations_url": "https://api.github.com/users/coen22/orgs",
"repos_url": "https://api.github.com/users/coen22/repos",
"events_url": "https://api.github.com/users/coen22/events{/privacy}",
"received_events_url": "https://api.github.com/users/coen22/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Not certain, but this may be related to #22885.",
"> Not certain, but this may be related to #22885.\r\n\r\nThanks for the reference, however the proposed workaround (`label_names=[\"labels\"]`) did not work.\r\n\r\n",
"Please post a reproducer we can execute.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Encountered the same issue! \r\nTrying to train a distilbert model on squad with the following code:\r\n```\r\nCUDA_VISIBLE_DEVICES=0 python3 ./transformers/examples/pytorch/question-answering/run_qa.py \\\r\n--model_name_or_path distilbert-base-cased \\\r\n--run_name distilbert-base-cased-squad-008 \\\r\n--dataset_name squad_v2 \\\r\n--do_train \\\r\n--do_eval \\\r\n--version_2_with_negative \\\r\n--learning_rate 3e-4 \\\r\n--lr_scheduler_type cosine \\\r\n--warmup_ratio 0.1 \\\r\n--num_train_epochs 8 \\\r\n--max_seq_length 512 \\\r\n--doc_stride 128 \\\r\n--evaluation_strategy steps \\\r\n--save_strategy steps \\\r\n--save_total_limit 3 \\\r\n--output_dir ./distilbert-base-cased-squad-008 \\\r\n--per_device_eval_batch_size 48 \\\r\n--per_device_train_batch_size 48 \\\r\n--push_to_hub true \\\r\n--hub_strategy end \\\r\n--hub_token ... \\\r\n--hub_private_repo true \\\r\n--load_best_model_at_end true \r\n\r\n```\r\n"
] | 1,682 | 1,696 | 1,686 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.13.3
- Safetensors version: 0.3.0
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes RTX 3090
- Using distributed or parallel set-up in script?: No
I'm running into this issue whenever I use a DatasetDict as the evaluation dataset:
```
Traceback (most recent call last):
  File "/mnt/e/alpaca-lora/finetune.py", line 304, in <module>
    fire.Fire(train)
  File "/home/coen/.local/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/coen/.local/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/home/coen/.local/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/mnt/e/alpaca-lora/finetune.py", line 294, in train
    trainer.train(resume_from_checkpoint=resume_from_checkpoint)
  File "/home/coen/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1662, in train
    return inner_training_loop(
  File "/home/coen/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2006, in _inner_training_loop
    self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
  File "/home/coen/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2291, in _maybe_log_save_evaluate
    self._save_checkpoint(model, trial, metrics=metrics)
  File "/home/coen/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2394, in _save_checkpoint
    metric_value = metrics[metric_to_check]
KeyError: 'eval_loss'
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Download [Alpaca-Lora](https://github.com/tloen/alpaca-lora) from the repository
2. Modify the code
```
if val_data_path is not None:
train_data = (
# data.select(range(10)).shuffle().map(generate_and_tokenize_prompt)
data.shuffle().map(generate_and_tokenize_prompt)
)
val_data: DatasetDict = load_from_disk(val_data_path)
val_data = (
val_data.map(generate_and_tokenize_prompt)
)
elif val_set_size > 0:
train_val = data.train_test_split(
test_size=val_set_size, shuffle=True, seed=42
)
train_data = (
train_val["train"].shuffle().map(generate_and_tokenize_prompt)
)
val_data: Dataset = (
train_val["test"].shuffle().map(generate_and_tokenize_prompt)
)
else:
train_data = data["train"].shuffle().map(generate_and_tokenize_prompt)
val_data: None = None
if not ddp and torch.cuda.device_count() > 1:
# keeps Trainer from trying its own DataParallelism when more than 1 gpu is available
model.is_parallelizable = True
model.model_parallel = True
# def compute_metrics(eval_preds):
# metric = evaluate.load("glue", "mrpc")
# logits, labels = eval_preds
# predictions = np.argmax(logits, axis=-1)
# return metric.compute(predictions=predictions, references=labels)
trainer = transformers.Trainer(
model=model,
train_dataset=train_data,
eval_dataset=val_data,
args=transformers.TrainingArguments(
per_device_train_batch_size=micro_batch_size,
gradient_accumulation_steps=gradient_accumulation_steps,
warmup_steps=100,
num_train_epochs=num_epochs,
learning_rate=learning_rate,
fp16=True,
logging_steps=10,
optim="adamw_torch",
evaluation_strategy="steps" if val_set_size > 0 else "no",
save_strategy="steps",
eval_steps=200 if val_set_size > 0 else None,
save_steps=200,
output_dir=output_dir,
save_total_limit=3,
load_best_model_at_end=True if val_set_size > 0 else False,
ddp_find_unused_parameters=False if ddp else None,
group_by_length=group_by_length,
report_to="wandb" if use_wandb else None,
run_name=wandb_run_name if use_wandb else None,
),
data_collator=transformers.DataCollatorForSeq2Seq(
tokenizer, pad_to_multiple_of=8, return_tensors="pt", padding=True
),
# compute_metrics=compute_metrics
)
model.config.use_cache = False
```
### Expected behavior
Training as normal, with separate evaluation on each Dataset in the dict.
[EDIT] The error occurs right after every set has been validated; I can see that it starts training again.
Am I doing something wrong?
I don't really see anything wrong with the evaluation datasets that I'm using.
They work when it's just one big evaluation Dataset object.
If you need more info, please let me know :)
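One observation that may explain this (my reading of the Trainer checkpointing code, not verified against 4.28.1 line by line): when `eval_dataset` is a dict, each split's metrics get prefixed with its key, e.g. `eval_<name>_loss`, so the default `metric_for_best_model` lookup of plain `eval_loss` fails once `load_best_model_at_end=True` triggers a checkpoint save. A sketch of the corresponding argument change, with `val1` as a hypothetical key in the DatasetDict:

```python
import transformers

# "val1" must match a key of the eval DatasetDict; the prefixed metric
# name is what ends up in the metrics dict for that split.
args = transformers.TrainingArguments(
    output_dir="./output",
    evaluation_strategy="steps",
    save_strategy="steps",
    eval_steps=200,
    save_steps=200,
    load_best_model_at_end=True,
    metric_for_best_model="eval_val1_loss",
)
```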
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23070/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23069
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23069/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23069/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23069/events
|
https://github.com/huggingface/transformers/issues/23069
| 1,689,608,828 |
I_kwDOCUB6oc5ktWJ8
| 23,069 |
convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py works for data2vec 1.0 checkpoint but not data2vec 2.0
|
{
"login": "alanrice",
"id": 11898,
"node_id": "MDQ6VXNlcjExODk4",
"avatar_url": "https://avatars.githubusercontent.com/u/11898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alanrice",
"html_url": "https://github.com/alanrice",
"followers_url": "https://api.github.com/users/alanrice/followers",
"following_url": "https://api.github.com/users/alanrice/following{/other_user}",
"gists_url": "https://api.github.com/users/alanrice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alanrice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alanrice/subscriptions",
"organizations_url": "https://api.github.com/users/alanrice/orgs",
"repos_url": "https://api.github.com/users/alanrice/repos",
"events_url": "https://api.github.com/users/alanrice/events{/privacy}",
"received_events_url": "https://api.github.com/users/alanrice/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Getting beyond the key error by either commenting [lines 212-213](https://github.com/huggingface/transformers/blob/849367ccf741d8c58aa88ccfe1d52d8636eaf2b7/src/transformers/models/data2vec/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py#L212C50-L213 ) or changing the keys to `modality_encoders.AUDIO.decoder.proj.weight` and `modality_encoders.AUDIO.decoder.proj.bias`, results in a `Could not infer model type from {cfg}` error.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/notebooks/scripts/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py\", line 287, in <module>\r\n convert_wav2vec2_checkpoint(\r\n File \"/usr/local/lib/python3.9/dist-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/notebooks/scripts/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py\", line 227, in convert_wav2vec2_checkpoint\r\n model = load_data2vec(converted_ckpt)\r\n File \"/notebooks/scripts/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py\", line 224, in load_data2vec\r\n model, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task([path])\r\n File \"/notebooks/fairseq/fairseq/checkpoint_utils.py\", line 484, in load_model_ensemble_and_task\r\n model = task.build_model(cfg.model, from_checkpoint=True)\r\n File \"/notebooks/fairseq/fairseq/tasks/audio_pretraining.py\", line 178, in build_model\r\n model = super().build_model(model_cfg, from_checkpoint)\r\n File \"/notebooks/fairseq/fairseq/tasks/fairseq_task.py\", line 355, in build_model\r\n model = models.build_model(cfg, self, from_checkpoint)\r\n File \"/notebooks/fairseq/fairseq/models/__init__.py\", line 101, in build_model\r\n f\"Could not infer model type from {cfg}. \"\r\nKeyError: \"'_name'\"\r\n``` ",
"It looks like the architecture of data2vec 2.0 is different from 1.0, so supporting this would require changing the modeling code for Data2Vec in Transformers or adding a new Data2Vec2 model. Patching the existing conversion script likely won't be sufficient.",
"Please note that the conversion script are provided as an indication from the model contributor on how they converted the original checkpoint to the Hugging Face format. They are not maintained and not expected to work on other checkpoints.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Leaving this as closed since the issue requires a new conversion script and new modelling code for data2vec2 (rather than being an issue with the existing data2vec code). Feel free to open a feature request if this is something you'd like to see @alanrice!"
] | 1,682 | 1,686 | 1,686 |
NONE
| null |
### System Info
- `transformers` version: 4.21.3
- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.3 (gpu)
- Jax version: 0.4.1
- JaxLib version: 0.4.1
### Who can help?
@sanchit-gandhi @sgugger
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
- Download [data2vec 1.0 Large (No fine-tuning) .pt](https://dl.fbaipublicfiles.com/fairseq/data2vec/vox_pretrained.pt) from [fairseq/data2vec](https://github.com/facebookresearch/fairseq/tree/main/examples/data2vec)
- Download config.json from [facebook/data2vec-audio-large](https://huggingface.co/facebook/data2vec-audio-large/blob/main/config.json)
- Run [convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py)
- pytorch_model.bin output
```shell
python scripts/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py \
--pytorch_dump_folder_path converted \
--checkpoint_path vox_pretrained.pt \
--config_path config.json \
--not_finetuned
```
<details><summary>Output</summary>
<p>
2023-04-29 15:56:57 | INFO | fairseq.tasks.text_to_speech | Please install tensorboardX: pip install tensorboardX
loading configuration file config.json
Model config Data2VecAudioConfig {
"_name": "data2vec_audio",
"activation_dropout": 0.1,
"adapter_kernel_size": 3,
"adapter_stride": 2,
"add_adapter": false,
"apply_spec_augment": true,
"architectures": [
"Data2VecAudioModel"
],
"attention_dropout": 0.1,
"bos_token_id": 1,
"classifier_proj_size": 256,
"codevector_dim": 768,
"contrastive_logits_temperature": 0.1,
"conv_bias": false,
"conv_dim": [
512,
512,
512,
512,
512,
512,
512
],
"conv_kernel": [
10,
3,
3,
3,
3,
2,
2
],
"conv_pos_kernel_size": 19,
"conv_stride": [
5,
2,
2,
2,
2,
2,
2
],
"ctc_loss_reduction": "sum",
"ctc_zero_infinity": false,
"diversity_loss_weight": 0.1,
"do_stable_layer_norm": true,
"eos_token_id": 2,
"feat_extract_activation": "gelu",
"feat_extract_dropout": 0.0,
"feat_extract_norm": "layer",
"feat_proj_dropout": 0.1,
"feat_quantizer_dropout": 0.0,
"final_dropout": 0.1,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout": 0.1,
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"initializer_range": 0.02,
"intermediate_size": 4096,
"layer_norm_eps": 1e-05,
"layerdrop": 0.0,
"mask_feature_length": 10,
"mask_feature_min_masks": 0,
"mask_feature_prob": 0.0,
"mask_time_length": 10,
"mask_time_min_masks": 2,
"mask_time_prob": 0.05,
"model_type": "data2vec-audio",
"num_adapter_layers": 3,
"num_attention_heads": 16,
"num_codevector_groups": 2,
"num_codevectors_per_group": 320,
"num_conv_pos_embedding_groups": 16,
"num_conv_pos_embeddings": 5,
"num_feat_extract_layers": 7,
"num_hidden_layers": 24,
"num_negatives": 100,
"output_hidden_size": 1024,
"pad_token_id": 0,
"proj_codevector_dim": 768,
"tdnn_dilation": [
1,
2,
3,
1,
1
],
"tdnn_dim": [
512,
512,
512,
512,
1500
],
"tdnn_kernel": [
5,
3,
3,
1,
1
],
"torch_dtype": "float32",
"transformers_version": "4.21.3",
"use_weighted_layer_sum": false,
"vocab_size": 32,
"xvector_output_dim": 512
}
2023-04-29 15:58:39 | WARNING | datasets.builder | Reusing dataset librispeech_asr_dummy (/root/.cache/huggingface/datasets/patrickvonplaten___librispeech_asr_dummy/clean/2.1.0/f2c70a4d03ab4410954901bde48c54b85ca1b7f9bf7d616e7e2a72b5ee6ddbfc)
It is strongly recommended to pass the ``sampling_rate`` argument to this function. Failing to do so can result in silent errors that might be hard to debug.
torch.Size([4, 666, 1024]) torch.Size([4, 666, 1024])
max_absolute_diff = 8.707307279109955e-05
Do both models output the same tensors? 🔥
Configuration saved in converted/config.json
Model weights saved in converted/pytorch_model.bin
Feature extractor saved in converted/preprocessor_config.json
</p>
</details>
- Download [data2vec 2.0 Large (No fine-tuning) .pt](https://dl.fbaipublicfiles.com/fairseq/data2vec2/large_vox.pt) from [fairseq/data2vec](https://github.com/facebookresearch/fairseq/tree/main/examples/data2vec)
- Download config.json from [facebook/data2vec-audio-large](https://huggingface.co/facebook/data2vec-audio-large/blob/main/config.json)
- Run [convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py)
- KeyError: 'final_proj.0.weight'
```shell
python scripts/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py \
--pytorch_dump_folder_path converted \
--checkpoint_path large_vox.pt \
--config_path config.json \
--not_finetuned
```
<details><summary>Output</summary>
<p>
2023-04-29 15:59:58 | INFO | fairseq.tasks.text_to_speech | Please install tensorboardX: pip install tensorboardX
loading configuration file config.json
Model config Data2VecAudioConfig {
"_name": "data2vec_audio",
"activation_dropout": 0.1,
"adapter_kernel_size": 3,
"adapter_stride": 2,
"add_adapter": false,
"apply_spec_augment": true,
"architectures": [
"Data2VecAudioModel"
],
"attention_dropout": 0.1,
"bos_token_id": 1,
"classifier_proj_size": 256,
"codevector_dim": 768,
"contrastive_logits_temperature": 0.1,
"conv_bias": false,
"conv_dim": [
512,
512,
512,
512,
512,
512,
512
],
"conv_kernel": [
10,
3,
3,
3,
3,
2,
2
],
"conv_pos_kernel_size": 19,
"conv_stride": [
5,
2,
2,
2,
2,
2,
2
],
"ctc_loss_reduction": "sum",
"ctc_zero_infinity": false,
"diversity_loss_weight": 0.1,
"do_stable_layer_norm": true,
"eos_token_id": 2,
"feat_extract_activation": "gelu",
"feat_extract_dropout": 0.0,
"feat_extract_norm": "layer",
"feat_proj_dropout": 0.1,
"feat_quantizer_dropout": 0.0,
"final_dropout": 0.1,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout": 0.1,
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"initializer_range": 0.02,
"intermediate_size": 4096,
"layer_norm_eps": 1e-05,
"layerdrop": 0.0,
"mask_feature_length": 10,
"mask_feature_min_masks": 0,
"mask_feature_prob": 0.0,
"mask_time_length": 10,
"mask_time_min_masks": 2,
"mask_time_prob": 0.05,
"model_type": "data2vec-audio",
"num_adapter_layers": 3,
"num_attention_heads": 16,
"num_codevector_groups": 2,
"num_codevectors_per_group": 320,
"num_conv_pos_embedding_groups": 16,
"num_conv_pos_embeddings": 5,
"num_feat_extract_layers": 7,
"num_hidden_layers": 24,
"num_negatives": 100,
"output_hidden_size": 1024,
"pad_token_id": 0,
"proj_codevector_dim": 768,
"tdnn_dilation": [
1,
2,
3,
1,
1
],
"tdnn_dim": [
512,
512,
512,
512,
1500
],
"tdnn_kernel": [
5,
3,
3,
1,
1
],
"torch_dtype": "float32",
"transformers_version": "4.21.3",
"use_weighted_layer_sum": false,
"vocab_size": 32,
"xvector_output_dim": 512
}
Traceback (most recent call last):
File "/notebooks/scripts/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py", line 287, in <module>
convert_wav2vec2_checkpoint(
File "/usr/local/lib/python3.9/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/notebooks/scripts/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py", line 213, in convert_wav2vec2_checkpoint
state_dict["model"]["final_proj.weight"] = state_dict["model"].pop("final_proj.0.weight")
KeyError: 'final_proj.0.weight'
</p>
</details>
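The `KeyError` above suggests the data2vec 2.0 checkpoint no longer stores its final projection under `final_proj.0.*`. A hypothetical workaround sketch (the key names are assumptions inferred from the traceback, not verified against the 2.0 checkpoint format) would be to inspect and patch the checkpoint before running the conversion script:
```python
import torch

# As in the traceback above, the fairseq checkpoint keeps its weights under "model".
state_dict = torch.load("large_vox.pt", map_location="cpu")
model_state = state_dict["model"]

# First inspect what the 2.0 checkpoint actually calls its final projection.
print([key for key in model_state if "final_proj" in key])

# Rename only when the old-style key exists, then re-save the patched checkpoint.
if "final_proj.0.weight" in model_state:
    model_state["final_proj.weight"] = model_state.pop("final_proj.0.weight")
    torch.save(state_dict, "large_vox_patched.pt")
```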
### Expected behavior
The expected behavior is that the model weights are saved in `pytorch_model.bin` for both data2vec 1.0 and 2.0 checkpoints.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23069/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23068
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23068/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23068/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23068/events
|
https://github.com/huggingface/transformers/pull/23068
| 1,689,604,580 |
PR_kwDOCUB6oc5PcOOn
| 23,068 |
🌐 [i18n-KO] Translated `tasks/zero_shot_object_detection.mdx` to Korean
|
{
"login": "HanNayeoniee",
"id": 33839093,
"node_id": "MDQ6VXNlcjMzODM5MDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/33839093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HanNayeoniee",
"html_url": "https://github.com/HanNayeoniee",
"followers_url": "https://api.github.com/users/HanNayeoniee/followers",
"following_url": "https://api.github.com/users/HanNayeoniee/following{/other_user}",
"gists_url": "https://api.github.com/users/HanNayeoniee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HanNayeoniee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HanNayeoniee/subscriptions",
"organizations_url": "https://api.github.com/users/HanNayeoniee/orgs",
"repos_url": "https://api.github.com/users/HanNayeoniee/repos",
"events_url": "https://api.github.com/users/HanNayeoniee/events{/privacy}",
"received_events_url": "https://api.github.com/users/HanNayeoniee/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Can you solve the conflicts so we can merge this PR?",
"> Can you solve the conflicts so we can merge this PR?\r\n\r\ntoctree file of this branch causes a conflict because it's different from the new version. \r\nAs shown in [[docs] Doc TOC updates](https://github.com/huggingface/transformers/pull/23049)\r\nLet me fix this after I update korean toctree first!\r\n",
"Closed in favor of #23430 "
] | 1,682 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" -->
# What does this PR do?
Translated the `tasks/zero_shot_object_detection.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- This leaves a record on the main issue! If you are practicing with the PseudoLab repo, please remove this! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (check that no translations are missing or duplicated)
- [x] Grammar Check (run a spell check)
- [x] Review or Add new terms to glossary (check the glossary and add new terms)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (confirm everything renders correctly in the live preview)
## Who can review? (Initial)
<!-- 1. Please reveal the comment below asking the PseudoLab team members for a review only after all of the checks above are complete! -->
Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Please reveal the comment below asking the Hugging Face staff for a review only after the review with the PseudoLab team members is finished! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23068/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23068/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23068",
"html_url": "https://github.com/huggingface/transformers/pull/23068",
"diff_url": "https://github.com/huggingface/transformers/pull/23068.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23068.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23067
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23067/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23067/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23067/events
|
https://github.com/huggingface/transformers/pull/23067
| 1,689,603,118 |
PR_kwDOCUB6oc5PcN9Z
| 23,067 |
added type hints in graphormer
|
{
"login": "dewasahu2003",
"id": 95997298,
"node_id": "U_kgDOBbjNcg",
"avatar_url": "https://avatars.githubusercontent.com/u/95997298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dewasahu2003",
"html_url": "https://github.com/dewasahu2003",
"followers_url": "https://api.github.com/users/dewasahu2003/followers",
"following_url": "https://api.github.com/users/dewasahu2003/following{/other_user}",
"gists_url": "https://api.github.com/users/dewasahu2003/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dewasahu2003/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dewasahu2003/subscriptions",
"organizations_url": "https://api.github.com/users/dewasahu2003/orgs",
"repos_url": "https://api.github.com/users/dewasahu2003/repos",
"events_url": "https://api.github.com/users/dewasahu2003/events{/privacy}",
"received_events_url": "https://api.github.com/users/dewasahu2003/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23067). All of your documentation changes will be reflected on that endpoint."
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
@Rocketknight1 I added type hints for `Graphormer` for PyTorch, as described in [issue #16059](https://github.com/huggingface/transformers/issues/16059).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23067/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23067",
"html_url": "https://github.com/huggingface/transformers/pull/23067",
"diff_url": "https://github.com/huggingface/transformers/pull/23067.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23067.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23066
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23066/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23066/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23066/events
|
https://github.com/huggingface/transformers/pull/23066
| 1,689,581,890 |
PR_kwDOCUB6oc5PcKA_
| 23,066 |
Update setup.py
|
{
"login": "ice-black",
"id": 55835551,
"node_id": "MDQ6VXNlcjU1ODM1NTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/55835551?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ice-black",
"html_url": "https://github.com/ice-black",
"followers_url": "https://api.github.com/users/ice-black/followers",
"following_url": "https://api.github.com/users/ice-black/following{/other_user}",
"gists_url": "https://api.github.com/users/ice-black/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ice-black/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ice-black/subscriptions",
"organizations_url": "https://api.github.com/users/ice-black/orgs",
"repos_url": "https://api.github.com/users/ice-black/repos",
"events_url": "https://api.github.com/users/ice-black/events{/privacy}",
"received_events_url": "https://api.github.com/users/ice-black/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23066). All of your documentation changes will be reflected on that endpoint."
] | 1,682 | 1,682 | 1,682 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23066/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23066",
"html_url": "https://github.com/huggingface/transformers/pull/23066",
"diff_url": "https://github.com/huggingface/transformers/pull/23066.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23066.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23065
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23065/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23065/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23065/events
|
https://github.com/huggingface/transformers/pull/23065
| 1,689,573,480 |
PR_kwDOCUB6oc5PcIcE
| 23,065 |
🌐 [i18n-KO] Translated `tasks/zero_shot_image_classification.mdx` to Korean
|
{
"login": "HanNayeoniee",
"id": 33839093,
"node_id": "MDQ6VXNlcjMzODM5MDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/33839093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HanNayeoniee",
"html_url": "https://github.com/HanNayeoniee",
"followers_url": "https://api.github.com/users/HanNayeoniee/followers",
"following_url": "https://api.github.com/users/HanNayeoniee/following{/other_user}",
"gists_url": "https://api.github.com/users/HanNayeoniee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HanNayeoniee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HanNayeoniee/subscriptions",
"organizations_url": "https://api.github.com/users/HanNayeoniee/orgs",
"repos_url": "https://api.github.com/users/HanNayeoniee/repos",
"events_url": "https://api.github.com/users/HanNayeoniee/events{/privacy}",
"received_events_url": "https://api.github.com/users/HanNayeoniee/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"LGTM! :-) "
] | 1,682 | 1,683 | 1,682 |
CONTRIBUTOR
| null |
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" -->
# What does this PR do?
Translated the `tasks/zero_shot_image_classification.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- This leaves a record on the main issue! If you are practicing with the PseudoLab repo, please remove this! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (check that no translations are missing or duplicated)
- [x] Grammar Check (run a spell check)
- [x] Review or Add new terms to glossary (check the glossary and add new terms)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (confirm everything renders correctly in the live preview)
## Who can review? (Initial)
<!-- 1. Please reveal the comment below asking the PseudoLab team members for a review only after all of the checks above are complete! -->
Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Please reveal the comment below asking the Hugging Face staff for a review only after the review with the PseudoLab team members is finished! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23065/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23065",
"html_url": "https://github.com/huggingface/transformers/pull/23065",
"diff_url": "https://github.com/huggingface/transformers/pull/23065.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23065.patch",
"merged_at": 1682986317000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23064
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23064/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23064/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23064/events
|
https://github.com/huggingface/transformers/pull/23064
| 1,689,566,422 |
PR_kwDOCUB6oc5PcHHL
| 23,064 |
🌐 [i18n-KO] docs: ko: Translate `multiple_choice.mdx`
|
{
"login": "gabrielwithappy",
"id": 102908949,
"node_id": "U_kgDOBiJEFQ",
"avatar_url": "https://avatars.githubusercontent.com/u/102908949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gabrielwithappy",
"html_url": "https://github.com/gabrielwithappy",
"followers_url": "https://api.github.com/users/gabrielwithappy/followers",
"following_url": "https://api.github.com/users/gabrielwithappy/following{/other_user}",
"gists_url": "https://api.github.com/users/gabrielwithappy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gabrielwithappy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gabrielwithappy/subscriptions",
"organizations_url": "https://api.github.com/users/gabrielwithappy/orgs",
"repos_url": "https://api.github.com/users/gabrielwithappy/repos",
"events_url": "https://api.github.com/users/gabrielwithappy/events{/privacy}",
"received_events_url": "https://api.github.com/users/gabrielwithappy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd",
"@sgugger, @ArthurZucker, @eunseojo \r\nMay you please review this PR?"
] | 1,682 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" -->
# What does this PR do?
Translated the `multiple_choice.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- This leaves a record on the main issue! If you are practicing with the PseudoLab repo, please remove this! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (check that no translations are missing or duplicated)
- [x] Grammar Check (run a spell check)
- [x] Review or Add new terms to glossary (check the glossary and add new terms)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (confirm everything renders correctly in the live preview)
## Who can review? (Initial)
Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
<!-- 1. Please reveal the comment below asking the PseudoLab team members for a review only after all of the checks above are complete! -->
<!-- Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
<!-- 2. Please reveal the comment below asking the Hugging Face staff for a review only after the review with the PseudoLab team members is finished! -->
<!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? -->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23064/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23064",
"html_url": "https://github.com/huggingface/transformers/pull/23064",
"diff_url": "https://github.com/huggingface/transformers/pull/23064.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23064.patch",
"merged_at": 1683301016000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23063
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23063/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23063/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23063/events
|
https://github.com/huggingface/transformers/pull/23063
| 1,689,547,949 |
PR_kwDOCUB6oc5PcDng
| 23,063 |
Flamingo Implementation
|
{
"login": "king159",
"id": 35168738,
"node_id": "MDQ6VXNlcjM1MTY4NzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/35168738?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/king159",
"html_url": "https://github.com/king159",
"followers_url": "https://api.github.com/users/king159/followers",
"following_url": "https://api.github.com/users/king159/following{/other_user}",
"gists_url": "https://api.github.com/users/king159/gists{/gist_id}",
"starred_url": "https://api.github.com/users/king159/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/king159/subscriptions",
"organizations_url": "https://api.github.com/users/king159/orgs",
"repos_url": "https://api.github.com/users/king159/repos",
"events_url": "https://api.github.com/users/king159/events{/privacy}",
"received_events_url": "https://api.github.com/users/king159/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Respect! Openflamingo needs be built with huggingface transformers for more efficient training and inference.\r\n\r\nWe have already adapted it in our [Otter model](https://github.com/Luodian/Otter) (an instruction tuned model based on flamingo). We uploaded a converted openflamingo-9b weights at [luodian/openflamingo-9b-hf](https://huggingface.co/luodian/openflamingo-9b-hf).\r\n\r\nThe model could be loaded via \r\n```python\r\nmodel = transformers.FlamingoForConditionalGeneration.from_pretrained(\"luodian/openflamingo-9b-hf\")\r\n```",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23063). All of your documentation changes will be reflected on that endpoint.",
"cc @amyeroberts and @younesbelkada ",
"Awesome work! Let us know when the PR is ready for review!"
] | 1,682 | 1,690 | 1,690 |
NONE
| null |
# What does this PR do?
Implementation of the Flamingo models (https://arxiv.org/abs/2204.14198). Model weights trained by the OpenFlamingo team can be downloaded [here](https://huggingface.co/openflamingo/OpenFlamingo-9B). A weight conversion script is included.
Weight conversion can be run via:
``` python
python src/transformers/models/flamingo/converting_flamingo_to_hf.py \
--old_ckpt_path /path/to/open/flamingo/weights \
--new_hf_path /output/path
```
Models can then be loaded via:
``` python
model = transformers.FlamingoForConditionalGeneration.from_pretrained("/output/path")
```
Example:
``` python
import requests
import torch
import transformers
from PIL import Image
tokenizer = model.text_tokenizer
image_processor = transformers.CLIPImageProcessor()
demo_image_one = Image.open(
requests.get(
"http://images.cocodataset.org/val2017/000000039769.jpg", stream=True
).raw
)
demo_image_two = Image.open(
requests.get(
"http://images.cocodataset.org/test-stuff2017/000000028137.jpg", stream=True
).raw
)
query_image = Image.open(
requests.get(
"http://images.cocodataset.org/test-stuff2017/000000028352.jpg", stream=True
).raw
)
vision_x = (
image_processor.preprocess(
[demo_image_one, demo_image_two, query_image], return_tensors="pt"
)["pixel_values"]
.unsqueeze(1)
.unsqueeze(0)
)
model.text_tokenizer.padding_side = "left"
lang_x = tokenizer(
["<image>An image of two cats.<|endofchunk|><image>An image of a bathroom sink.<|endofchunk|><image>An image of"],
return_tensors="pt",
)
generated_text = model.generate(
vision_x=vision_x,
lang_x=lang_x["input_ids"],
attention_mask=lang_x["attention_mask"],
max_new_tokens=20,
num_beams=3,
)
print("Generated text: ", model.text_tokenizer.decode(generated_text[0]))
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23063/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23063/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23063",
"html_url": "https://github.com/huggingface/transformers/pull/23063",
"diff_url": "https://github.com/huggingface/transformers/pull/23063.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23063.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23062
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23062/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23062/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23062/events
|
https://github.com/huggingface/transformers/issues/23062
| 1,689,474,580 |
I_kwDOCUB6oc5ks1YU
| 23,062 |
[docs] broken link in `torchscript.mdx`
|
{
"login": "sim-so",
"id": 96299403,
"node_id": "U_kgDOBb1piw",
"avatar_url": "https://avatars.githubusercontent.com/u/96299403?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sim-so",
"html_url": "https://github.com/sim-so",
"followers_url": "https://api.github.com/users/sim-so/followers",
"following_url": "https://api.github.com/users/sim-so/following{/other_user}",
"gists_url": "https://api.github.com/users/sim-so/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sim-so/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sim-so/subscriptions",
"organizations_url": "https://api.github.com/users/sim-so/orgs",
"repos_url": "https://api.github.com/users/sim-so/repos",
"events_url": "https://api.github.com/users/sim-so/events{/privacy}",
"received_events_url": "https://api.github.com/users/sim-so/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Thanks for catching this! Would you like to open a PR with your fix? 🤗",
"Hello, @stevhliu !\r\nI opened PR #23060 for translating the document to Korean as well as fixing the issue.\r\nPlease let me know if it would be better to open another PR for the fix separately."
] | 1,682 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
### Description
The broken link `serialization#using-torchscript-in-python` should be `torchscript#using-torchscript-in-python`, at line 201 of `torchscript.mdx`.
### Document / language
`torchscript.mdx` / en, kr
### Suggestion
As is:
```
### Converting a model for AWS Neuron
Convert a model for AWS NEURON using the same code from [Using TorchScript in
Python](serialization#using-torchscript-in-python) to trace a `BertModel`. Import the
`torch.neuron` framework extension to access the components of the Neuron SDK through a
Python API:
```
To be:
```
### Converting a model for AWS Neuron
Convert a model for AWS NEURON using the same code from [Using TorchScript in
Python](torchscript#using-torchscript-in-python) to trace a `BertModel`. Import the
`torch.neuron` framework extension to access the components of the Neuron SDK through a
Python API:
```
Please let me know if I missed something in the guidelines.
Thank you in advance for your attention to it!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23062/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23061
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23061/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23061/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23061/events
|
https://github.com/huggingface/transformers/pull/23061
| 1,689,438,075 |
PR_kwDOCUB6oc5PbtWn
| 23,061 |
num_noise_spans should be <= num_items #22246
|
{
"login": "alexcpn",
"id": 1157251,
"node_id": "MDQ6VXNlcjExNTcyNTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1157251?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexcpn",
"html_url": "https://github.com/alexcpn",
"followers_url": "https://api.github.com/users/alexcpn/followers",
"following_url": "https://api.github.com/users/alexcpn/following{/other_user}",
"gists_url": "https://api.github.com/users/alexcpn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexcpn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexcpn/subscriptions",
"organizations_url": "https://api.github.com/users/alexcpn/orgs",
"repos_url": "https://api.github.com/users/alexcpn/repos",
"events_url": "https://api.github.com/users/alexcpn/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexcpn/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23061). All of your documentation changes will be reflected on that endpoint."
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
Clone of https://github.com/huggingface/transformers/pull/22938
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23061/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23061",
"html_url": "https://github.com/huggingface/transformers/pull/23061",
"diff_url": "https://github.com/huggingface/transformers/pull/23061.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23061.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23060
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23060/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23060/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23060/events
|
https://github.com/huggingface/transformers/pull/23060
| 1,689,287,866 |
PR_kwDOCUB6oc5PbOB8
| 23,060 |
🌐 [i18n-KO] Translated `torchscript.mdx` to Korean
|
{
"login": "sim-so",
"id": 96299403,
"node_id": "U_kgDOBb1piw",
"avatar_url": "https://avatars.githubusercontent.com/u/96299403?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sim-so",
"html_url": "https://github.com/sim-so",
"followers_url": "https://api.github.com/users/sim-so/followers",
"following_url": "https://api.github.com/users/sim-so/following{/other_user}",
"gists_url": "https://api.github.com/users/sim-so/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sim-so/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sim-so/subscriptions",
"organizations_url": "https://api.github.com/users/sim-so/orgs",
"repos_url": "https://api.github.com/users/sim-so/repos",
"events_url": "https://api.github.com/users/sim-so/events{/privacy}",
"received_events_url": "https://api.github.com/users/sim-so/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Could you review this PR? 😃 \r\n@sgugger, @ArthurZucker, @eunseojo"
] | 1,682 | 1,685 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
<!-- Remove if not applicable -->
Translated the `torchscript.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
Fixes https://github.com/huggingface/transformers/issues/23062
## Before reviewing
- [x] Check for missing / redundant translations (check that no translations are missing or duplicated)
- [x] Grammar Check (run a spell check)
- [x] Review or Add new terms to glossary (check the glossary and add new terms)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (confirm everything renders correctly in the live preview)
## Who can review? (Initial)
Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
@sgugger, @ArthurZucker, @eunseojo
May you please review this PR?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23060/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23060",
"html_url": "https://github.com/huggingface/transformers/pull/23060",
"diff_url": "https://github.com/huggingface/transformers/pull/23060.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23060.patch",
"merged_at": 1683034080000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23059
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23059/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23059/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23059/events
|
https://github.com/huggingface/transformers/pull/23059
| 1,689,287,824 |
PR_kwDOCUB6oc5PbOBb
| 23,059 |
GPTNeoXForQuestionAnswering
|
{
"login": "peter-sk",
"id": 6168908,
"node_id": "MDQ6VXNlcjYxNjg5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6168908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peter-sk",
"html_url": "https://github.com/peter-sk",
"followers_url": "https://api.github.com/users/peter-sk/followers",
"following_url": "https://api.github.com/users/peter-sk/following{/other_user}",
"gists_url": "https://api.github.com/users/peter-sk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peter-sk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peter-sk/subscriptions",
"organizations_url": "https://api.github.com/users/peter-sk/orgs",
"repos_url": "https://api.github.com/users/peter-sk/repos",
"events_url": "https://api.github.com/users/peter-sk/events{/privacy}",
"received_events_url": "https://api.github.com/users/peter-sk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger @younesbelkada as Arthur is on holdiays 👍 ",
"@younesbelkada @amyeroberts this one is ready for review :-)"
] | 1,682 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
Adds GPTNeoXForQuestionAnswering.
Includes #23030 and #23057.
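For reference, a usage sketch of the new head (assuming the class lands as named in the title; the checkpoint name is a placeholder for any GPT-NeoX checkpoint, and the QA head would be freshly initialized until fine-tuned):
``` python
import torch
from transformers import AutoTokenizer, GPTNeoXForQuestionAnswering

checkpoint = "EleutherAI/pythia-70m"  # placeholder; any GPT-NeoX checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = GPTNeoXForQuestionAnswering.from_pretrained(checkpoint)  # QA head is newly initialized

# Standard extractive-QA pattern: encode question + context, take argmax spans.
inputs = tokenizer("Who wrote Hamlet?", "Hamlet was written by Shakespeare.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs.input_ids[0, start : end + 1]))
```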
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
@younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23059/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23059",
"html_url": "https://github.com/huggingface/transformers/pull/23059",
"diff_url": "https://github.com/huggingface/transformers/pull/23059.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23059.patch",
"merged_at": 1683209715000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23058
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23058/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23058/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23058/events
|
https://github.com/huggingface/transformers/issues/23058
| 1,689,284,485 |
I_kwDOCUB6oc5ksG-F
| 23,058 |
OneFormer processor does not return correctly formatted class_labels tensors
|
{
"login": "rbavery",
"id": 22258697,
"node_id": "MDQ6VXNlcjIyMjU4Njk3",
"avatar_url": "https://avatars.githubusercontent.com/u/22258697?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rbavery",
"html_url": "https://github.com/rbavery",
"followers_url": "https://api.github.com/users/rbavery/followers",
"following_url": "https://api.github.com/users/rbavery/following{/other_user}",
"gists_url": "https://api.github.com/users/rbavery/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rbavery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rbavery/subscriptions",
"organizations_url": "https://api.github.com/users/rbavery/orgs",
"repos_url": "https://api.github.com/users/rbavery/repos",
"events_url": "https://api.github.com/users/rbavery/events{/privacy}",
"received_events_url": "https://api.github.com/users/rbavery/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi,\r\n\r\nI'd recommend taking a look at the MaskFormer/Mask2Former notebooks regarding fine-tuning on custom data: https://github.com/NielsRogge/Transformers-Tutorials/tree/master/MaskFormer. As the API of OneFormer is identical, except that it has one additional `task_inputs` input which you need to prepare as well.",
"Hi @NielsRogge @amyeroberts \r\nActually I was following your MaskFormer and Mask2Former tutorials \r\nand my task was to finetune on Semantic information and have an instance level prediction. Where Mask2Former was performing well.\r\nStill Problems I have mentioned on Closed Issue : https://github.com/huggingface/transformers/issues/21644\r\nMy main request was Can you also Write Training Tutorials for OneFormer in MaskFormer, Text input part is creating the problem and the segmentation_maps parameter. \r\n\r\n```\r\n 779 annotation_classes = label[\"classes\"]\r\n 780 annotation_masks = label[\"masks\"]\r\n--> 782 texts = [\"a semantic photo\"] * self.num_text\r\n 783 classes = []\r\n 784 masks = []\r\n\r\nTypeError: can't multiply sequence by non-int of type 'NoneType'\r\n\r\n```\r\n```\r\npreprocessor = OneFormerImageProcessor.from_pretrained(config.MODEL_PATH)\r\npreprocessor.num_text = 2\r\npreprocessor.num_classes = 2\r\npreprocessor.ignore_index=3 \r\npreprocessor.do_reduce_labels=False \r\npreprocessor.do_resize=False\r\npreprocessor.from_json_filedo_rescale=False\r\npreprocessor.do_normalize=True\r\npreprocessor.image_mean = config.MEAN\r\npreprocessor.image_std = config.STD\r\n```\r\nafter introducing num_text:\r\n```\r\n\r\n 972 num_class_obj[cls_name] = 0\r\n 974 for i, label in enumerate(annotations):\r\n--> 975 task = task_inputs[i]\r\n 976 if task == \"semantic\":\r\n 977 classes, masks, texts = self.get_semantic_annotations(label, num_class_obj)\r\n\r\nIndexError: list index out of range\r\n```\r\n\r\n```\r\ndef collate_fn(batch):\r\n inputs = list(zip(*batch))\r\n images = inputs[0]\r\n segmentation_maps = inputs[1]\r\n # this function pads the inputs to the same size,\r\n # and creates a pixel mask\r\n # actually padding isn't required here since we are cropping\r\n \r\n batch = preprocessor(\r\n images,\r\n task_inputs=[\"semantic\"],\r\n segmentation_maps=segmentation_maps,\r\n\r\n return_tensors=\"pt\",\r\n )\r\n \r\n return batch\r\n```",
"Any solution? I'm having the same or similar issues when changing the MaskFormerImageProcessor to OneFormerProcessor in the tutorial https://github.com/NielsRogge/Transformers-Tutorials/blob/master/MaskFormer/Fine-tuning/Fine_tuning_MaskFormer_on_a_panoptic_dataset.ipynb",
"@praeclarumjj3 As you implemented the model, could you confirm the desired structure for the inputs when training the model. Based on [this docstring](https://github.com/huggingface/transformers/blob/18ee1fe76295239335bf1528c744fe1cfba21cc8/src/transformers/models/oneformer/image_processing_oneformer.py#L948) and [this comment](https://github.com/huggingface/transformers/blob/18ee1fe76295239335bf1528c744fe1cfba21cc8/src/transformers/models/oneformer/image_processing_oneformer.py#L1049), this indicates to me that only a single image could be passed in at a time when training the model, as the number of labels for each image is different and hence the inputs cannot be batched. Is this correct? ",
"Hi @amyeroberts, Although the number of labels for each image differs, we can still batch the inputs (`image_inputs` and `tasks_inputs`) to OneFormer because they will have the same dimension. As for the labels, we pass those as a `List[Tensor]` to the loss function anyway, so there is no need to batch those as the `OneFormerProcessor` returns labels as a list already.\r\nhttps://github.com/huggingface/transformers/blob/18ee1fe76295239335bf1528c744fe1cfba21cc8/src/transformers/models/oneformer/modeling_oneformer.py#L475\r\nhttps://github.com/huggingface/transformers/blob/18ee1fe76295239335bf1528c744fe1cfba21cc8/src/transformers/models/oneformer/modeling_oneformer.py#L530 \r\n\r\nThe `_pad_images_to_max_in_batch` method takes the list `mask_labels` as input and outputs a batched tensor with each mask_label padded to a fixed shape. So, it's possible to batch inputs to OneFormer.\r\nhttps://github.com/huggingface/transformers/blob/18ee1fe76295239335bf1528c744fe1cfba21cc8/src/transformers/models/oneformer/modeling_oneformer.py#L411\r\n\r\nThe inputs to the models would be the same as those to a `MaskFormer model for Panoptic Segmentation` with an addition of a `task_inputs` field. So, if you have a list (of length `N`) of images: `input_images`, their mask form labels: `segmentation_maps`, you need to pass a list (of length 'N') of strings of the form [\"panoptic\", \"semantic\", .... {length `N`}] as the `task_inputs` arguments.\r\nhttps://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/models/oneformer/processing_oneformer.py#L78\r\n\r\nSo, the input preparation would look like:\r\n\r\n```python\r\n# processor is an object of the `OneFormerProcessor` class\r\nencoded_inputs = processor(images=input_images, task_inputs=task_inputs, segmentation_maps=segmentation_maps, return_tensors=\"pt\")\r\n```\r\n\r\nPlease let me know if you have any other questions or issues.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,697 | 1,697 |
NONE
| null |
### System Info
- `transformers` version: 4.29.0.dev0
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): 2.11.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@amyeroberts
I'm trying to finetune `shi-labs/oneformer_ade20k_swin_tiny` on my own dataset. I've hit two problems, one with the docs and one with the actual library code for image processing.
1. There are no docs on fine-tuning or training the OneFormer model on this page: https://huggingface.co/docs/transformers/model_doc/oneformer . So I relied on investigating this test
https://github.com/huggingface/transformers/blob/main/tests/models/oneformer/test_modeling_oneformer.py#L364
2. The training test doesn't actually use the OneFormerProcessor that is used in all of the inference examples in https://huggingface.co/docs/transformers/model_doc/oneformer
I think this is because the OneFormerProcessor and the training test produce differently formatted class labels. In the training test, the class_labels are created from scratch here: https://github.com/huggingface/transformers/blob/main/tests/models/oneformer/test_modeling_oneformer.py#L106
When training, it's expected that class_labels is a Tensor shaped like [batch_size, num_classes], where a particular element in a batch would have [1,0,0,0] to represent the 0th class.
But the OneFormerProcessor returns a list of tensors of class indices, which can hold values greater than 1: `[tensor([0, 3])]`
This eventually leads to an error here https://github.com/huggingface/transformers/blob/v4.28.1/src/transformers/models/oneformer/modeling_oneformer.py#L306 where we get an index-out-of-bounds error, surfaced as a CUDA assert.
I think this should be rectified by including a training example in the docs and changing the test and OneFormer Processor so that they work when training the model.
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. adapt test_training to create class_labels with OneFormerProcessor instead of from scratch
2. run the adapted test_training
I have a failing example at https://github.com/developmentseed/slickformer but it takes quite a bit of setup and the dataset isn't public yet.
### Expected behavior
I'd expect the OneFormerProcessor to return a class_labels entry in the dict with the same format as `class_labels` in `test_training`. To summarize, the class_labels need to be one-hot encoded for training, but the OneFormerProcessor isn't doing this.
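To make the mismatch concrete, here is a minimal sketch (plain PyTorch; the shapes and class values are illustrative assumptions) contrasting the two formats described above:
```python
import torch

# Format test_training builds from scratch: one row per label, one-hot over
# num_classes (here 4 classes; this batch element contains classes 0 and 3).
one_hot_labels = torch.tensor([[1, 0, 0, 0],
                               [0, 0, 0, 1]], dtype=torch.float)

# Format the processor returns: a list with one tensor of class *indices* per image.
index_labels = [torch.tensor([0, 3])]

# Bridging the two would amount to one-hot encoding the index tensor.
converted = torch.nn.functional.one_hot(index_labels[0], num_classes=4).float()
print(torch.equal(converted, one_hot_labels))  # True
```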
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23058/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23057
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23057/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23057/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23057/events
|
https://github.com/huggingface/transformers/pull/23057
| 1,689,277,696 |
PR_kwDOCUB6oc5PbL_f
| 23,057 |
GPTNeoForQuestionAnswering
|
{
"login": "peter-sk",
"id": 6168908,
"node_id": "MDQ6VXNlcjYxNjg5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6168908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peter-sk",
"html_url": "https://github.com/peter-sk",
"followers_url": "https://api.github.com/users/peter-sk/followers",
"following_url": "https://api.github.com/users/peter-sk/following{/other_user}",
"gists_url": "https://api.github.com/users/peter-sk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peter-sk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peter-sk/subscriptions",
"organizations_url": "https://api.github.com/users/peter-sk/orgs",
"repos_url": "https://api.github.com/users/peter-sk/repos",
"events_url": "https://api.github.com/users/peter-sk/events{/privacy}",
"received_events_url": "https://api.github.com/users/peter-sk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"One more to go (for now).",
"@sgugger @younesbelkada - as Arthur is on holdiays 👍 ",
"@younesbelkada \r\nI merged with main to isolate the GPT Neo specific parts. Now everything seems to work fine.",
"rebased - let's see whether it helps",
"@younesbelkada merging helped - ready to merge 👍 "
] | 1,682 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
Adds QA support for GPT Neo.
Includes PR #23030.
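A minimal usage sketch of the new head (hedged: the checkpoint id is illustrative, and the QA span head is randomly initialized until fine-tuned, so the extracted answer will be arbitrary):
```python
import torch
from transformers import AutoTokenizer, GPTNeoForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")
model = GPTNeoForQuestionAnswering.from_pretrained("EleutherAI/gpt-neo-125m")

question = "Where does Tim live?"
context = "My name is Tim and I live in Sweden."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions for the answer span.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs.input_ids[0, start : end + 1]))
```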
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
@younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23057/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23057",
"html_url": "https://github.com/huggingface/transformers/pull/23057",
"diff_url": "https://github.com/huggingface/transformers/pull/23057.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23057.patch",
"merged_at": 1683143959000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23056
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23056/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23056/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23056/events
|
https://github.com/huggingface/transformers/pull/23056
| 1,689,259,947 |
PR_kwDOCUB6oc5PbIKm
| 23,056 |
fix random attention for pytorch's bigbird/pegasus_bigbird
|
{
"login": "Bearnardd",
"id": 43574448,
"node_id": "MDQ6VXNlcjQzNTc0NDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/43574448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bearnardd",
"html_url": "https://github.com/Bearnardd",
"followers_url": "https://api.github.com/users/Bearnardd/followers",
"following_url": "https://api.github.com/users/Bearnardd/following{/other_user}",
"gists_url": "https://api.github.com/users/Bearnardd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bearnardd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bearnardd/subscriptions",
"organizations_url": "https://api.github.com/users/Bearnardd/orgs",
"repos_url": "https://api.github.com/users/Bearnardd/repos",
"events_url": "https://api.github.com/users/Bearnardd/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bearnardd/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @Bearnardd Thank you for the PR.\r\n\r\nI have one question: why `def _bigbird_block_rand_mask_with_head` is not modified for this pytorch BigBird file ..?",
"Hi @sanchit-gandhi! I have removed the static method as I think it is the best approach. ",
"> Hi @Bearnardd Thank you for the PR.\r\n> \r\n> I have one question: why `def _bigbird_block_rand_mask_with_head` is not modified for this pytorch BigBird file ..?\r\n\r\nThanks for the comment! To be honest I am not sure If I understand you correctly, since from what I can see this function is updated. Could you elaborate what exactly is missing?",
"> > Hi @Bearnardd Thank you for the PR.\r\n> > I have one question: why `def _bigbird_block_rand_mask_with_head` is not modified for this pytorch BigBird file ..?\r\n> \r\n> Thanks for the comment! To be honest I am not sure If I understand you correctly, since from what I can see this function is updated. Could you elaborate what exactly is missing?\r\n\r\nSorry, my bad. You are right :-)",
"cc @sgugger ",
"I have pushed the changes @sgugger :)"
] | 1,682 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
Fixes # (issue)
https://github.com/huggingface/transformers/issues/23055
# What does this PR do?
Add control over usage of random attention of `BigBird` based on current mode (training/eval)
## Who can review?
@sanchit-gandhi @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23056/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23056",
"html_url": "https://github.com/huggingface/transformers/pull/23056",
"diff_url": "https://github.com/huggingface/transformers/pull/23056.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23056.patch",
"merged_at": 1683500104000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23055
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23055/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23055/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23055/events
|
https://github.com/huggingface/transformers/issues/23055
| 1,689,258,082 |
I_kwDOCUB6oc5ksAhi
| 23,055 |
Pytorch BigBird random attention
|
{
"login": "Bearnardd",
"id": 43574448,
"node_id": "MDQ6VXNlcjQzNTc0NDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/43574448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bearnardd",
"html_url": "https://github.com/Bearnardd",
"followers_url": "https://api.github.com/users/Bearnardd/followers",
"following_url": "https://api.github.com/users/Bearnardd/following{/other_user}",
"gists_url": "https://api.github.com/users/Bearnardd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bearnardd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bearnardd/subscriptions",
"organizations_url": "https://api.github.com/users/Bearnardd/orgs",
"repos_url": "https://api.github.com/users/Bearnardd/repos",
"events_url": "https://api.github.com/users/Bearnardd/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bearnardd/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @sanchit-gandhi @ydshieh! I have opened [PR](https://github.com/huggingface/transformers/pull/23056) that fixes failing tests. I am wondering if the changes in the PR are okay (usage of random attention based on current mode) or do we want to have some more control over usage of random attention e.g. add `deterministic` argument for `__call__` of `BigBirdPreTrainedModel`. Secondly I was wondering what is the advantage of marking `_bigbird_block_rand_mask` as a `staticmethod` and then calling it with `self._bigbird_block_rand_mask` and passing it arguments from `self` like `self.max_seqlen` instead of treating it as a regular method. It looks kinda weird to me. Am I missing something?",
"Closed via https://github.com/huggingface/transformers/pull/23056."
] | 1,682 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
### Reproduction
`Pytorch->Flax` and `Flax->Pytorch` equivalence tests were failing. At the moment they are skipped by https://github.com/huggingface/transformers/pull/23040
### Expected behavior
While working on https://github.com/huggingface/transformers/pull/21023 I found a bug in PyTorch's implementation of `BigBird`: random attention is used regardless of whether we are in training or eval mode. The correct behaviour is that during inference (eval) we should not introduce any randomness, hence random attention should not be used.
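A minimal sketch of the intended behaviour (a toy module, not the actual `BigBird` code): sample random attention blocks only when `self.training` is set, so eval stays deterministic.
```python
from typing import Optional

import torch


class ToyBigBirdRandAttention(torch.nn.Module):
    def __init__(self, num_blocks: int, num_rand_blocks: int):
        super().__init__()
        self.num_blocks = num_blocks
        self.num_rand_blocks = num_rand_blocks

    def rand_block_ids(self) -> Optional[torch.Tensor]:
        # Random attention is a training-time mechanism; during inference
        # the attention pattern must be deterministic.
        if not self.training:
            return None
        return torch.randint(self.num_blocks, (self.num_blocks, self.num_rand_blocks))


attn = ToyBigBirdRandAttention(num_blocks=8, num_rand_blocks=3)
attn.train()
print(attn.rand_block_ids())  # random block indices
attn.eval()
print(attn.rand_block_ids())  # None -> no randomness at inference
```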
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23055/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23054
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23054/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23054/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23054/events
|
https://github.com/huggingface/transformers/issues/23054
| 1,689,237,766 |
I_kwDOCUB6oc5kr7kG
| 23,054 |
Pipeline(summarization) code example and documentation need updating
|
{
"login": "TomBerton",
"id": 9907572,
"node_id": "MDQ6VXNlcjk5MDc1NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9907572?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TomBerton",
"html_url": "https://github.com/TomBerton",
"followers_url": "https://api.github.com/users/TomBerton/followers",
"following_url": "https://api.github.com/users/TomBerton/following{/other_user}",
"gists_url": "https://api.github.com/users/TomBerton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TomBerton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TomBerton/subscriptions",
"organizations_url": "https://api.github.com/users/TomBerton/orgs",
"repos_url": "https://api.github.com/users/TomBerton/repos",
"events_url": "https://api.github.com/users/TomBerton/events{/privacy}",
"received_events_url": "https://api.github.com/users/TomBerton/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"1. I beg to differ. Examples are meant to be simple to read, Having a real long form text just hinders readability imo.\r\n\r\n2.\r\n`min_length` and `max_length` are specified here: https://huggingface.co/docs/transformers/v4.28.1/en/main_classes/text_generation#transformers.GenerationMixin.greedy_search.max_length\r\n\r\n3. @sgugger What do you think here ? I agree examples shouldn't raise warnings, however I feel odd burning the name of a specific model into this example, since users are likely to not understand where to get that model id from.\r\n```\r\n# Fetch summarization models at https://huggingface.co/models?pipeline_tag=summarization&sort=downloads\r\nsummarizer = pipeline(model=\"philschmid/bart-large-cnn-samsum\")\r\n```\r\n\r\nSomething like that. That probably affects ALL examples within pipelines.",
"cc @gante The warning somehow needs to be addressed so that users of the `pipeline` function do not see it.",
"Hi @TomBerton 👋 \r\n\r\nThe warnings you described were updated in #23128, which should make the pipeline experience more pleasant and self-documenting 🤗 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,686 | 1,686 |
NONE
| null |
### System Info
Using Google Colab on Mac OS Ventura 13.2.1
Chrome Version 112.0.5615.137 (Official Build) (x86_64)
Using the install command:
`!pip install transformers`
Which downloads the following:
<img width="1264" alt="Screenshot 2023-04-28 at 5 53 25 PM" src="https://user-images.githubusercontent.com/9907572/235266551-f9c627f9-22db-41c0-89ba-1f9814d72fd5.png">
### Who can help?
@Narsil
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
In the documentation for the pipeline summarization [here](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.SummarizationPipeline) the example needs updating. Running the current example below:
```python
# use bart in pytorch
summarizer = pipeline("summarization")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)
```
produces the following output in Google Colab:
```
Using a pipeline without specifying a model name and revision in production is not recommended.
Your max_length is set to 20, but you input_length is only 11. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=5)
[{'summary_text': ' An apple a day, keeps the doctor away from your doctor away, says Dr.'}]
```
The documentation doesn't state what `min_length=` and `max_length=` actually do and the output doesn't tell you either.
1. Is the `max_length` the maximum token length of the output or input?
2. Based on the output from running the code, does the input length affect the output?
Running this code:
```python
# use t5 in tf
summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base", framework="tf")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)
```
produces the following output in Google Colab:
```
Your max_length is set to 20, but you input_length is only 13. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=6)
/usr/local/lib/python3.10/dist-packages/transformers/generation/tf_utils.py:745: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
warnings.warn(
[{'summary_text': 'an apple a day, keeps the doctor away from the doctor .'}]
```
### Expected behavior
1. Show the expected output by using longer text as the input.
2. Provide a clear explanation of what `min_length=` and `max_length=` actually do.
3. Avoid warnings when running example code from the documentation, or specify a stable model/revision to use (see the sketch below).
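A minimal sketch of what point 3 could look like (the model id is the one suggested in the maintainer comment above; pinning a `revision` as well would avoid silent upstream changes):
```python
from transformers import pipeline

# Pinning a concrete checkpoint suppresses the
# "Using a pipeline without specifying a model name..." warning.
summarizer = pipeline(
    "summarization",
    model="philschmid/bart-large-cnn-samsum",
)

long_text = (
    "Hugging Face pipelines bundle a tokenizer and a model behind one call. "
    "For summarization, min_length and max_length bound the number of tokens "
    "in the generated summary, not the length of the input text."
)
print(summarizer(long_text, min_length=5, max_length=30))
```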
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23054/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23053
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23053/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23053/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23053/events
|
https://github.com/huggingface/transformers/issues/23053
| 1,689,230,660 |
I_kwDOCUB6oc5kr51E
| 23,053 |
Passing a str Enum to `from_pretrained` gives OSError
|
{
"login": "rsmith49",
"id": 17658617,
"node_id": "MDQ6VXNlcjE3NjU4NjE3",
"avatar_url": "https://avatars.githubusercontent.com/u/17658617?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rsmith49",
"html_url": "https://github.com/rsmith49",
"followers_url": "https://api.github.com/users/rsmith49/followers",
"following_url": "https://api.github.com/users/rsmith49/following{/other_user}",
"gists_url": "https://api.github.com/users/rsmith49/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rsmith49/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rsmith49/subscriptions",
"organizations_url": "https://api.github.com/users/rsmith49/orgs",
"repos_url": "https://api.github.com/users/rsmith49/repos",
"events_url": "https://api.github.com/users/rsmith49/events{/privacy}",
"received_events_url": "https://api.github.com/users/rsmith49/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I'm not sure why you think this should be supported. `str(Tmp.BERT)` is `'Tmp.BERT'`, which is not a valid identifier to pass to `from_pretrained`. You need to pass `Tmp.BERT.value`.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,686 | 1,686 |
NONE
| null |
### System Info
Python version 3.8
`transformers==4.28.1`
Ubuntu
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When using a str Enum (as specified [here](https://docs.python.org/3.10/library/enum.html#others) in the python docs) as input to `AutoTokenizer.from_pretrained`, the model name that gets searched is different from the member value of the Enum. Example to repro:
```
from enum import Enum
from transformers import AutoTokenizer
class Tmp(str, Enum):
BERT = 'bert-base-uncased'
t = AutoTokenizer.from_pretrained(Tmp.BERT)
```
Error:
```
Traceback (most recent call last):
File "/home/ubuntu/test_env/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 259, in hf_raise_for_status
response.raise_for_status()
File "/home/ubuntu/test_env/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/Tmp.BERT/resolve/main/tokenizer_config.json
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/ubuntu/test_env/lib/python3.8/site-packages/transformers/utils/hub.py", line 409, in cached_file
resolved_file = hf_hub_download(
File "/home/ubuntu/test_env/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn
return fn(*args, **kwargs)
File "/home/ubuntu/test_env/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1195, in hf_hub_download
metadata = get_hf_file_metadata(
File "/home/ubuntu/test_env/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn
return fn(*args, **kwargs)
File "/home/ubuntu/test_env/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1541, in get_hf_file_metadata
hf_raise_for_status(r)
File "/home/ubuntu/test_env/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 291, in hf_raise_for_status
raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-644c4a27-5bd929b32085d52d1a1b4b30)
Repository Not Found for url: https://huggingface.co/Tmp.BERT/resolve/main/tokenizer_config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/test_env/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 642, in from_pretrained
tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
File "/home/ubuntu/test_env/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 486, in get_tokenizer_config
resolved_config_file = cached_file(
File "/home/ubuntu/test_env/lib/python3.8/site-packages/transformers/utils/hub.py", line 424, in cached_file
raise EnvironmentError(
OSError: Tmp.BERT is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
```
### Expected behavior
We should see the model being searched for use the string value of the Enum member instead of a different value (the traceback above shows `Tmp.BERT`, i.e. the member's `str()` representation, being used instead of `bert-base-uncased`).
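A minimal repro/workaround sketch (this matches the maintainer reply quoted above: on Python ≤ 3.10, `str()` of a `(str, Enum)` member renders as `'Tmp.BERT'`, so pass `.value` explicitly):
```python
from enum import Enum

from transformers import AutoTokenizer


class Tmp(str, Enum):
    BERT = "bert-base-uncased"


print(str(Tmp.BERT))   # 'Tmp.BERT'          <- what ends up in the hub URL
print(Tmp.BERT.value)  # 'bert-base-uncased' <- the intended identifier

t = AutoTokenizer.from_pretrained(Tmp.BERT.value)  # resolves correctly
```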
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23053/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23052
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23052/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23052/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23052/events
|
https://github.com/huggingface/transformers/pull/23052
| 1,689,051,708 |
PR_kwDOCUB6oc5PaaWk
| 23,052 |
Generate: prepare assisted generation for release
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
MEMBER
| null |
# What does this PR do?
This PR makes a few final adjustments to assisted generation before its release, including:
1. Merge previously named step 7 [the forward pass after matching with assistant tokens] into step 6 [slicing variables based on the number of matches] -- the variables are already computed in step 3 [selecting the model's next tokens based on the logits], so the code becomes more concise and it helps me explain what's going on more easily. See the (partially bugged) gif below, which is WIP for the blog post.
2. Swaps the order of step 6 [slicing variables] with step 5 [updating the number of candidates for the next iteration] -- makes more sense that the update step for the next iteration is the last one :)
3. Better variable names and improved comments (so the implementation becomes self-documenting)

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23052/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23052",
"html_url": "https://github.com/huggingface/transformers/pull/23052",
"diff_url": "https://github.com/huggingface/transformers/pull/23052.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23052.patch",
"merged_at": 1682762010000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23051
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23051/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23051/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23051/events
|
https://github.com/huggingface/transformers/pull/23051
| 1,688,863,668 |
PR_kwDOCUB6oc5PZxfj
| 23,051 |
Fixed default config for `Pix2Struct` model to set `Pix2StructTextModel` to `is_decoder=True`
|
{
"login": "gbarello-uipath",
"id": 48561156,
"node_id": "MDQ6VXNlcjQ4NTYxMTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/48561156?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbarello-uipath",
"html_url": "https://github.com/gbarello-uipath",
"followers_url": "https://api.github.com/users/gbarello-uipath/followers",
"following_url": "https://api.github.com/users/gbarello-uipath/following{/other_user}",
"gists_url": "https://api.github.com/users/gbarello-uipath/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbarello-uipath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbarello-uipath/subscriptions",
"organizations_url": "https://api.github.com/users/gbarello-uipath/orgs",
"repos_url": "https://api.github.com/users/gbarello-uipath/repos",
"events_url": "https://api.github.com/users/gbarello-uipath/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbarello-uipath/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Just updated all pix2struct checkpoints that are under `google` org! Thanks again @gbarello-uipath for flagging this",
"Shouldn't the matcha and deplot checkpoints be updated as well?",
"Good point, will update them now",
"Just updated them! Thanks for flagging @RainbowMan1 "
] | 1,682 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
Previously, the `Pix2StructTextModel` was configured with `is_decoder=False` by default causing the attention mask used for self-attention to be non-causal and causing fine-tuning to fail.
As a fix, this PR adds `is_decoder=True` default kwarg to the `Pix2StructTextConfig` class in order to correctly configure the text model as a decoder.
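For checkpoints whose configs still ship `is_decoder=False`, a hedged workaround sketch (assuming the decoder module shares the same `text_config` object, which is how these composite models are usually wired; the checkpoint id is illustrative):
```python
from transformers import Pix2StructForConditionalGeneration

model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-base")
# Force causal self-attention in the text decoder before fine-tuning.
model.config.text_config.is_decoder = True
```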
Fixes #22903
@younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23051/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23051/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23051",
"html_url": "https://github.com/huggingface/transformers/pull/23051",
"diff_url": "https://github.com/huggingface/transformers/pull/23051.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23051.patch",
"merged_at": 1683049241000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23050
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23050/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23050/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23050/events
|
https://github.com/huggingface/transformers/issues/23050
| 1,688,575,885 |
I_kwDOCUB6oc5kpZ-N
| 23,050 |
[New model] 🐸TTS advanced Text-to-Speech
|
{
"login": "jozefchutka",
"id": 750041,
"node_id": "MDQ6VXNlcjc1MDA0MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/750041?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jozefchutka",
"html_url": "https://github.com/jozefchutka",
"followers_url": "https://api.github.com/users/jozefchutka/followers",
"following_url": "https://api.github.com/users/jozefchutka/following{/other_user}",
"gists_url": "https://api.github.com/users/jozefchutka/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jozefchutka/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jozefchutka/subscriptions",
"organizations_url": "https://api.github.com/users/jozefchutka/orgs",
"repos_url": "https://api.github.com/users/jozefchutka/repos",
"events_url": "https://api.github.com/users/jozefchutka/events{/privacy}",
"received_events_url": "https://api.github.com/users/jozefchutka/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false | null |
[] |
[
"Hi @jozefchutka I would like to work on this issue, I see multiple models under [Implemented Models](https://github.com/coqui-ai/TTS/tree/dev#implemented-models) on your link, do you have any recommendation about which one to start first?",
"Hi @susnato , thanks for looking into this. I hope to eventually run TTS in browser via (transformers.js), based on which my recommendation would be to pick a model that would be suitable in terms of performance / size",
"Hi @jozefchutka thanks for replying, I was thinking about Speedy-Speech but I didn't see that model inside of `TTS/tts/models` in dev branch, am I looking in wrong branch?",
"I have no idea honestly. But I have just discovered github provides very nice code browsing view, including search.\r\n\r\n\r\nIf its nowhere to find, it would be worth to reach out to 🐸TTS team\r\n",
"cc @sanchit-gandhi ",
"Hey @jozefchutka and @susnato - Coqui were previously focused on providing strong open-source TTS checkpoints, however in the last year they pivoted to more end-user services (see https://twitter.com/coqui_ai/status/1638573847296499712). They haven't been open-sourcing these latest models, and as a result their open-source checkpoints have fallen by the wayside a bit compared to the latest TTS research (e.g. VALL-E, Bark, MQTTS). I would say that a year ago it would have been a very exciting addition, but now there are more performant checkpoints that are growing in popularity amongst the open-source community. I would recommend checking out the aforementioned models if you're interested in a TTS model integration! Also see related https://github.com/huggingface/transformers/issues/22487#issuecomment-1496340245",
"Hi @sanchit-gandhi thanks for replying! Actually I was going through the same issue and saw your [comment](https://github.com/huggingface/transformers/issues/22487#issuecomment-1496312713) -\r\n\r\n>Indeed, a TTS pipeline would be super helpful to run SpeechT5. We're currently planning on waiting till we have 1-2 more TTS models in the library before pushing ahead with a TTS pipeline, in order to verify that the pipeline is generalisable and gives a benefit over loading a single model + processor.\r\n\r\nI was hoping to somehow contribute to the TTS pipeline, but now that you said \r\n>They haven't been open-sourcing these latest models, and as a result their open-source checkpoints have fallen by the wayside a bit compared to the latest TTS research (e.g. VALL-E, Bark, MQTTS)\r\n\r\nis a TTS pipeiline still in queue or should I focus on others like https://paperswithcode.com/task/text-to-speech-synthesis ?\r\n\r\n\r\n\r\n",
"Hi @sanchit-gandhi @susnato thanks for the insights. If there are better alternatives please go for it. ",
"IMO the TTS pipeline will be worth pursuing once the two ongoing TTS PRs are complete:\r\n* Bark #23375\r\n* FastSpeech2 #23439 \r\n\r\n=> we'd then have three models on which to base the TTS pipeline!\r\n\r\nRight now I think these are probably the most worthwhile TTS models to work on in transformers? There's also MQTTS: https://github.com/b04901014/MQTTS But that hasn't gained much traction. Do you know of any other recent TTS models that are gaining popularity amongst the community that we might have missed?",
"The only other bookmark I have is https://github.com/elevenlabs/elevenlabs-python , but that doesnt seem open model, just API? Worth for someone with better understanding in field to research.",
"As far as I understand, ElevenLabs is only a paid API @jozefchutka, but definitely a performant low-latency model. Interestingly a new ElevenLabs demo popped-up on the HF Hub: https://huggingface.co/spaces/elevenlabs/tts So potentially they're trying to increase their OS presence?",
"My understanding is the same",
"Hi @sanchit-gandhi my knowledge about recent TTS models is very limited, but I read about some of them maybe they are worth adding - how about [Tacotron 2](https://arxiv.org/pdf/1712.05884.pdf)(an implementation by NVIDIA [here](https://github.com/NVIDIA/tacotron2)) or [Parallel Tacotron 2: A Non-Autoregressive Neural TTS Model with Differentiable Duration Modeling](https://arxiv.org/pdf/2103.14574.pdf) . Also I found some unofficial implementations for [VALL-E](https://arxiv.org/pdf/2301.02111.pdf) - [lifeiteng/vall-e](https://github.com/lifeiteng/vall-e/tree/main) and [enhuiz/vall-e](https://github.com/enhuiz/vall-e) but both are without pretrained weights or [TransformerTTS](https://arxiv.org/pdf/1809.08895.pdf) (an PaddlePaddle implementation [here](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/paddlespeech/t2s/models/transformer_tts/transformer_tts.py) and weights [here](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/ljspeech/tts1)). \r\n\r\nIf they are not as interesting then I would like to implement MQTTS. What do you think?",
"There is one more project I have just discovered called Piper https://github.com/rhasspy/piper\r\n- MIT license\r\n- sounds natural https://rhasspy.github.io/piper-samples/\r\n- onnx voices available on https://huggingface.co/rhasspy/piper-voices/tree/v1.0.0/en/en_US being reasonable sized\r\n\r\n@susnato , @sanchit-gandhi please let me know if interested, or does this need a separate issue opened?",
"Hi @jozefchutka we are currently integrating `tortoise-tts` to [HF diffusers](https://github.com/huggingface/diffusers/pull/4106). \r\nI would be interested in adding this after this integration is over and also if it this model is approved by the maintainers. ",
"Thanks for flagging @jozefchutka! Tortoise holds a lot of promise since eventually we'll be able to fine-tune it - I think this means in the long-run we'll be able to build on it more than piper? WDYT?",
"My idea is to use tts model on web via transformers.js. It seems piper has reasonably sized voice models (~50MB) and faster than realtime performance (probably 10x?).\r\n\r\n```\r\nReal-time factor: 0.03615920479326211 (infer=0.743479167 sec, audio=20.56126984126984 sec)\r\n# 20 sec .wav was generated in 0.7 sec\r\n```\r\n\r\nI can not find .onnx models for tortoise-tts do you have any idea of size and performance?",
"Tortoise has it's name because it's pretty slow, even on GPU ;-)",
"Thats concerning, do you have any benchmark to share?",
"We should be able to speed it up quite a bit in `diffusers` with torch compile, flash attention, and scheduler choice (similar to the optimisations presented in this blog post: https://huggingface.co/blog/audioldm2)"
] | 1,682 | 1,696 | null |
NONE
| null |
### Model description
🐸TTS is a library for advanced Text-to-Speech generation. It's built on the latest research and was designed to achieve the best trade-off among ease of training, speed, and quality. 🐸TTS comes with pretrained models and tools for measuring dataset quality, and is already used in 20+ languages for products and research projects.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
GithHub repo: https://github.com/coqui-ai/TTS
Samples: http://erogol.com/ddc-samples/
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23050/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/23049
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23049/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23049/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23049/events
|
https://github.com/huggingface/transformers/pull/23049
| 1,688,512,672 |
PR_kwDOCUB6oc5PYleV
| 23,049 |
[docs] Doc TOC updates
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
This PR restructures TOC for the documentation. All the previous links remain working (except the two pages that have been removed: migration guide and converting from TF).
Here's the scope of the restructure:
a) TOC is sorted from “beginner” topics to more advanced making it easier to know where an answer to a question might be
b) Some topics have been renamed to be concise with the rest in the same section and (in some cases) more descriptive
c) Task Guides are collapsed by default and are now on the same level (currently NLP task guides are hidden, and not aligned with other modalities)
d) “General usage” has been renamed to “Developer Guides”
e) Benchmarks, notebooks, and community resources have been moved under Developer Guides
f) “Converting from TensorFlow checkpoints” and "Migrating from previous packages" pages removed
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23049/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23049",
"html_url": "https://github.com/huggingface/transformers/pull/23049",
"diff_url": "https://github.com/huggingface/transformers/pull/23049.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23049.patch",
"merged_at": 1682688269000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23048
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23048/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23048/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23048/events
|
https://github.com/huggingface/transformers/pull/23048
| 1,688,490,081 |
PR_kwDOCUB6oc5PYgh2
| 23,048 |
🌐 [i18n-KO] Translated `tasks/image_classification.mdx` to Korean
|
{
"login": "0525hhgus",
"id": 47289574,
"node_id": "MDQ6VXNlcjQ3Mjg5NTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/47289574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/0525hhgus",
"html_url": "https://github.com/0525hhgus",
"followers_url": "https://api.github.com/users/0525hhgus/followers",
"following_url": "https://api.github.com/users/0525hhgus/following{/other_user}",
"gists_url": "https://api.github.com/users/0525hhgus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/0525hhgus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/0525hhgus/subscriptions",
"organizations_url": "https://api.github.com/users/0525hhgus/orgs",
"repos_url": "https://api.github.com/users/0525hhgus/repos",
"events_url": "https://api.github.com/users/0525hhgus/events{/privacy}",
"received_events_url": "https://api.github.com/users/0525hhgus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"May you please review this PR? 😄 \r\n@sgugger, @ArthurZucker, @eunseojo "
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
<!-- PR의 제목은 "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으로 부탁드립니다 -->
# What does this PR do?
Translated the `tasks/image_classification.mdx` file of the documentation to Korean.
Thank you in advance for your review 😄
Part of https://github.com/huggingface/transformers/issues/20179
<!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. 위 체크가 모두 완료된 뒤에만 가짜연구소 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
Team PseudoLab, may you please review this PR?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
May you please review this PR?
@sgugger, @ArthurZucker, @eunseojo
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23048/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23048",
"html_url": "https://github.com/huggingface/transformers/pull/23048",
"diff_url": "https://github.com/huggingface/transformers/pull/23048.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23048.patch",
"merged_at": 1682949006000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23047
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23047/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23047/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23047/events
|
https://github.com/huggingface/transformers/issues/23047
| 1,688,244,366 |
I_kwDOCUB6oc5koJCO
| 23,047 |
FLAVA: module 'torch.distributed.nn.functional' has no attribute 'all_gather_with_backprop'
|
{
"login": "amariucaitheodor",
"id": 32778667,
"node_id": "MDQ6VXNlcjMyNzc4NjY3",
"avatar_url": "https://avatars.githubusercontent.com/u/32778667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amariucaitheodor",
"html_url": "https://github.com/amariucaitheodor",
"followers_url": "https://api.github.com/users/amariucaitheodor/followers",
"following_url": "https://api.github.com/users/amariucaitheodor/following{/other_user}",
"gists_url": "https://api.github.com/users/amariucaitheodor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amariucaitheodor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amariucaitheodor/subscriptions",
"organizations_url": "https://api.github.com/users/amariucaitheodor/orgs",
"repos_url": "https://api.github.com/users/amariucaitheodor/repos",
"events_url": "https://api.github.com/users/amariucaitheodor/events{/privacy}",
"received_events_url": "https://api.github.com/users/amariucaitheodor/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada and @amyeroberts ",
"Hi @amariucaitheodor \r\nThis sounds like being the correct fix in my opinion. I can't find that method either on the PT documentation, and I guess we never flagged that issue as not many users have run the model in a distributed mode..\r\nWould you mind opening a PR for that? If you can't, happy to do it! "
] | 1,682 | 1,683 | 1,683 |
NONE
| null |
https://github.com/huggingface/transformers/blob/a0e733283930bdb9ae2b1afdc53ec5f2daefb033/src/transformers/models/flava/modeling_flava.py#L1696
The following error is thrown when running FLAVA with PyTorch 2.0 and `global_backprop_contrastive=True`: `AttributeError: module 'torch.distributed.nn.functional' has no attribute 'all_gather_with_backprop'`. As far as I know, this attribute never existed in PyTorch.
The bug might have to do with the fact that `all_gather` is renamed to `all_gather_with_backprop` in [facebookresearch/multimodal](https://github.com/facebookresearch/multimodal) and this could have been copied over: https://github.com/facebookresearch/multimodal/blob/c6f6e44ec6e0addfdf01695db860a6febeb2d88b/torchmultimodal/utils/distributed.py#L12
A rename to `all_gather` should fix this I think.
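A minimal sketch of the proposed rename (function and variable names are illustrative; the differentiable collective in core PyTorch is `torch.distributed.nn.functional.all_gather`, which requires an initialized process group at call time):
```python
import torch
import torch.distributed.nn.functional as dist_fn


def gather_with_grad(embeddings: torch.Tensor) -> torch.Tensor:
    # `all_gather` (not `all_gather_with_backprop`) is the autograd-aware
    # collective in core PyTorch; it returns one tensor per rank.
    gathered = dist_fn.all_gather(embeddings)
    return torch.cat(gathered, dim=0)
```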
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23047/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23047/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23046
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23046/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23046/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23046/events
|
https://github.com/huggingface/transformers/pull/23046
| 1,688,222,606 |
PR_kwDOCUB6oc5PXmko
| 23,046 |
[Doctest] Add new checks
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Will add all the files to the IGNORE_DOC_NON_TESTED",
"Mostly the intent is to make sure new models that are added to the library are doctested (while currently the reviewer has to make sure it is added, but we usually forget)",
"Looking at the new `check_pr_documentation_tests` that only checks files that are modified + in the `documentation_test.txt` , my initial goal of making sure that new models are tested is not attained. This PR will adress that, by adding a check to make sure that new model addition are adding the files to test them. cc @ydshieh ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Sure! Closing this."
] | 1,682 | 1,687 | 1,687 |
COLLABORATOR
| null |
# What does this PR do?
Checks to make sure that the examples in any `doc/en/...` are tested
Checks to make sure that any model or config examples are also nightly tested
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23046/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23046",
"html_url": "https://github.com/huggingface/transformers/pull/23046",
"diff_url": "https://github.com/huggingface/transformers/pull/23046.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23046.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23045
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23045/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23045/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23045/events
|
https://github.com/huggingface/transformers/pull/23045
| 1,688,120,939 |
PR_kwDOCUB6oc5PXQ5U
| 23,045 |
Cuda rng_state_all is used when saving in distributed mode so same should also be used when loading
|
{
"login": "ShivamShrirao",
"id": 37087513,
"node_id": "MDQ6VXNlcjM3MDg3NTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/37087513?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShivamShrirao",
"html_url": "https://github.com/ShivamShrirao",
"followers_url": "https://api.github.com/users/ShivamShrirao/followers",
"following_url": "https://api.github.com/users/ShivamShrirao/following{/other_user}",
"gists_url": "https://api.github.com/users/ShivamShrirao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShivamShrirao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShivamShrirao/subscriptions",
"organizations_url": "https://api.github.com/users/ShivamShrirao/orgs",
"repos_url": "https://api.github.com/users/ShivamShrirao/repos",
"events_url": "https://api.github.com/users/ShivamShrirao/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShivamShrirao/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
When saving in distributed mode, this snippet uses `torch.cuda.random.get_rng_state_all()`.
https://github.com/huggingface/transformers/blob/a0e733283930bdb9ae2b1afdc53ec5f2daefb033/src/transformers/trainer.py#L2417-L2421
But while loading, `torch.cuda.random.set_rng_state_all()` is not used in the distributed case, which causes issues when resuming training.
https://github.com/huggingface/transformers/blob/a0e733283930bdb9ae2b1afdc53ec5f2daefb033/src/transformers/trainer.py#L2323-L2328
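A hedged sketch of the symmetric load path (`checkpoint_rng_state` and the distributed flag are illustrative names, mirroring the save snippet above):
```python
import torch


def restore_cuda_rng_state(checkpoint_rng_state: dict, is_distributed: bool) -> None:
    if not torch.cuda.is_available():
        return
    if is_distributed:
        # save used get_rng_state_all(), so load must use set_rng_state_all()
        torch.cuda.random.set_rng_state_all(checkpoint_rng_state["cuda"])
    else:
        torch.cuda.random.set_rng_state(checkpoint_rng_state["cuda"])
```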
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23045/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23045",
"html_url": "https://github.com/huggingface/transformers/pull/23045",
"diff_url": "https://github.com/huggingface/transformers/pull/23045.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23045.patch",
"merged_at": 1682688482000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23044
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23044/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23044/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23044/events
|
https://github.com/huggingface/transformers/issues/23044
| 1,688,118,376 |
I_kwDOCUB6oc5knqRo
| 23,044 |
The 3D attention mask in the LongFormer is wrong
|
{
"login": "syiswell",
"id": 51729891,
"node_id": "MDQ6VXNlcjUxNzI5ODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/51729891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/syiswell",
"html_url": "https://github.com/syiswell",
"followers_url": "https://api.github.com/users/syiswell/followers",
"following_url": "https://api.github.com/users/syiswell/following{/other_user}",
"gists_url": "https://api.github.com/users/syiswell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/syiswell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/syiswell/subscriptions",
"organizations_url": "https://api.github.com/users/syiswell/orgs",
"repos_url": "https://api.github.com/users/syiswell/repos",
"events_url": "https://api.github.com/users/syiswell/events{/privacy}",
"received_events_url": "https://api.github.com/users/syiswell/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
NONE
| null |
### Feature request
I tried using a 3D attention mask in the LongFormer, but it also failed. I found that the code
```python
attention_mask = nn.functional.pad(
    attention_mask, (0, padding_len), value=0
)  # no attention on the padding tokens
```
at line 1626 of `modeling_longformer.py` may not support a 3D attention mask. Please correct me if I am wrong.
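If the padding call above is indeed the culprit, a hedged sketch of a 3D-aware variant (my assumption, not the library's actual fix) would pad the last two dimensions instead of one:
```python
import torch
import torch.nn.functional as F

attention_mask = torch.ones(2, 10, 10)  # toy 3D mask: [batch, seq_len, seq_len]
padding_len = 6

if attention_mask.dim() == 2:
    # current behaviour: pad the single sequence dimension
    attention_mask = F.pad(attention_mask, (0, padding_len), value=0)
else:
    # 3D mask: both query and key dimensions must grow to seq_len + padding_len
    attention_mask = F.pad(attention_mask, (0, padding_len, 0, padding_len), value=0)

print(attention_mask.shape)  # torch.Size([2, 16, 16])
```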
### Motivation
I want to pass a 3D attention mask to the LongFormer to control the field of visibility (which tokens can attend to which) for different tokens.
### Your contribution
posting a code snippet example
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23044/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23043
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23043/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23043/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23043/events
|
https://github.com/huggingface/transformers/pull/23043
| 1,688,101,184 |
PR_kwDOCUB6oc5PXMq5
| 23,043 |
extend the test files
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I need to check if the new test files exist before adding them to the list. Will update later ",
"> I have already given my thoughts a thousand times on how such failures should be fixed, but since I'm being ignored again...\r\n\r\n@sgugger I don't mean to ignore your suggestion, but the only discussion I remembered is [this slack discussion](https://huggingface.slack.com/archives/C01NE71C4F7/p1678714056980159?thread_ts=1678480555.678359&cid=C01NE71C4F7), where you mentioned (to my previous messages) with\r\n\r\n> Mmmm, maybe there is a post-processing function that could do that yes. (Note that the whole thing for the pipeline will disappear with the new test fecther and the new way pipeline tests are designed).\r\n\r\nTherefore I assume the approach in this PR (current version) is fine.\r\n\r\nI might forget anything you mentioned earlier somewhere else. In this case, please share the link, and I am happy to take a look your suggestion. Otherwise, feel free to drop what you think the best. Thank you.\r\n\r\n> This is not my preferred solution, but if we go for this, I'd like the added tests to be logged somewhere, so we can inspect the results. Otherwise we can't debug if there is a failure of the test fetcher, as the generated config is not exactly readable.\r\n\r\nIf we keep this approach, I am happy to save files of these modified version.\r\n\r\n",
"Note that I am not really in favor of duplicating the tests. We have these tests both in PT/TF or PT/Flax test files. It's already kind of duplication. And if we copy each version to another framework again, that is duplication of duplication.",
"Let's go with logging the modified test files somehow, so we can inspect the result of the code you add then.",
"Looks like it didn't touch any new file though.",
"> Looks like it didn't touch any new file though.\r\n\r\nYeah, I am bad at getting the correct path. Now it works (if I triggered the test via a change in a file), see \r\n\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/63288/workflows/721ab3d6-b1af-4f2d-b036-9bebbdaef2cc/jobs/780955",
"@sgugger Could you check the last commit and see if it is OK? ~~(I will run a test to make sure everything works as expected tomorrow before merge, already tired today with some personal meetings)~~.\r\n\r\nSee [the new run page](https://app.circleci.com/pipelines/github/huggingface/transformers/63300/workflows/6cedeb02-795b-489f-8894-f5320ec64dd1/jobs/781128) and [here](https://app.circleci.com/pipelines/github/huggingface/transformers/63300/workflows/351a027d-9393-4c7d-a128-61cef5786f30/jobs/781146)\r\n\r\nOne thing to note is that, I put everything regarding cross tests into a single file as well as in the cross test jobs. So `test_modeling_tf_xxx.py` might be in the `tests_torch_and_flax` job and vice versa in some cases. It doesn't really matter as we correctly install the only necessary libraries in the job."
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
Extend the test files in the cross-test CI jobs.
In #21023, the flax bigbird modeling file was changed and the flax test file now skips the pt/flax tests. However, the test fetcher is not designed to take the corresponding pytorch test file into account, and we get a test failure on `main` in the `nightly` run.
This PR extends the test files for the `torch_and_tf` and `torch_and_flax` jobs to avoid such a situation.
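For illustration, a hypothetical sketch of this kind of extension (the real test fetcher lives in `utils/tests_fetcher.py` and differs in detail):
```python
def extend_cross_framework_tests(test_files):
    # For every flax/tf modeling test file, also schedule the corresponding
    # pytorch test file, so pt/flax and pt/tf equivalence tests are exercised
    # from both sides.
    extended = set(test_files)
    for path in test_files:
        name = path.rsplit("/", 1)[-1]
        if name.startswith("test_modeling_flax_") or name.startswith("test_modeling_tf_"):
            pt_name = name.replace("test_modeling_flax_", "test_modeling_").replace(
                "test_modeling_tf_", "test_modeling_"
            )
            extended.add(path.replace(name, pt_name))
    return sorted(extended)
```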
The effect could be seen in this run
https://app.circleci.com/pipelines/github/huggingface/transformers/63246/workflows/84722f3a-1259-4226-973d-267c74ca9aee/jobs/780372
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23043/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23043",
"html_url": "https://github.com/huggingface/transformers/pull/23043",
"diff_url": "https://github.com/huggingface/transformers/pull/23043.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23043.patch",
"merged_at": 1682713534000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23042
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23042/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23042/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23042/events
|
https://github.com/huggingface/transformers/issues/23042
| 1,688,042,727 |
I_kwDOCUB6oc5knXzn
| 23,042 |
Using `inputs_embeds` for generation gives an incorrect warning
|
{
"login": "zrthxn",
"id": 35369637,
"node_id": "MDQ6VXNlcjM1MzY5NjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/35369637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zrthxn",
"html_url": "https://github.com/zrthxn",
"followers_url": "https://api.github.com/users/zrthxn/followers",
"following_url": "https://api.github.com/users/zrthxn/following{/other_user}",
"gists_url": "https://api.github.com/users/zrthxn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zrthxn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zrthxn/subscriptions",
"organizations_url": "https://api.github.com/users/zrthxn/orgs",
"repos_url": "https://api.github.com/users/zrthxn/repos",
"events_url": "https://api.github.com/users/zrthxn/events{/privacy}",
"received_events_url": "https://api.github.com/users/zrthxn/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @gante ",
"Hey @zrthxn 👋 Splitting my reply in two parts, the warning and the generation from input embeds.\r\n\r\nWarning: agreed, it should check e.g. whether the input tensor has 3 or more dims (and don't emit the warning it that case). Would you like to open a PR to fix it? :) (I think the same issue is present in TF and FLAX as well)\r\n\r\nGeneration: I've double-checked generation with input embeddings, and everything seems fine. Have a look at the example below\r\n```py\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"huggyllama/llama-7b\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"huggyllama/llama-7b\")\r\n\r\ntext = \"Hello world\"\r\ninput_ids = tokenizer.encode(text, return_tensors=\"pt\")\r\n\r\n# Traditional way of generating text\r\noutputs = model.generate(input_ids)\r\nprint(\"\\ngenerate + input_ids:\", tokenizer.decode(outputs[0], skip_special_tokens=True))\r\n\r\n# From inputs_embeds -- exact same output if you also pass `input_ids`. If you don't\r\n# pass `input_ids`, you will get the same generated content but without the prompt\r\ninputs_embeds = model.model.embed_tokens(input_ids)\r\noutputs = model.generate(input_ids, inputs_embeds=inputs_embeds)\r\nprint(\"\\ngenerate + inputs_embeds:\", tokenizer.decode(outputs[0], skip_special_tokens=True))\r\n```",
"@gante I confirmed once again and found that the `input_embeds` works. The problem was something I was doing with my embeddings. And yes, I'll create a PR for the warning.",
"> Hey @zrthxn 👋 Splitting my reply in two parts, the warning and the generation from input embeds.\r\n> \r\n> Warning: agreed, it should check e.g. whether the input tensor has 3 or more dims (and don't emit the warning it that case). Would you like to open a PR to fix it? :) (I think the same issue is present in TF and FLAX as well)\r\n> \r\n> Generation: I've double-checked generation with input embeddings, and everything seems fine. Have a look at the example below\r\n> \r\n> ```python\r\n> from transformers import AutoModelForCausalLM, AutoTokenizer\r\n> \r\n> model = AutoModelForCausalLM.from_pretrained(\"huggyllama/llama-7b\")\r\n> tokenizer = AutoTokenizer.from_pretrained(\"huggyllama/llama-7b\")\r\n> \r\n> text = \"Hello world\"\r\n> input_ids = tokenizer.encode(text, return_tensors=\"pt\")\r\n> \r\n> # Traditional way of generating text\r\n> outputs = model.generate(input_ids)\r\n> print(\"\\ngenerate + input_ids:\", tokenizer.decode(outputs[0], skip_special_tokens=True))\r\n> \r\n> # From inputs_embeds -- exact same output if you also pass `input_ids`. If you don't\r\n> # pass `input_ids`, you will get the same generated content but without the prompt\r\n> inputs_embeds = model.model.embed_tokens(input_ids)\r\n> outputs = model.generate(input_ids, inputs_embeds=inputs_embeds)\r\n> print(\"\\ngenerate + inputs_embeds:\", tokenizer.decode(outputs[0], skip_special_tokens=True))\r\n> ```\r\n\r\nI've tested out your example @gante and everythink works fine. However when i switch model to `lmsys/vicuna-13b-v1.3` i'm getting error. Do you know what is the difference? I'm assuming that both models share the same implementations in `transformers.models.llama.modeling_llama.LlamaForCausalLM`. \r\n\r\nMy code\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n \"lmsys/vicuna-13b-v1.3\",\r\n load_in_8bit=True,\r\n torch_dtype=torch.float16,\r\n device_map=\"auto\",\r\n)\r\ntokenizer = AutoTokenizer.from_pretrained(\"lmsys/vicuna-13b-v1.3\")\r\n\r\n\r\ntext = \"Hello world\"\r\ninput_ids = tokenizer.encode(text, return_tensors=\"pt\").to(model.device)\r\n\r\ninputs_embeds = model.model.embed_tokens(input_ids)\r\noutputs = model.generate(inputs_embeds=inputs_embeds, max_new_tokens=10)\r\nprint(\r\n \"\\ngenerate + inputs_embeds:\",\r\n tokenizer.decode(outputs[0], skip_special_tokens=True),\r\n)\r\n```\r\nStack trace\r\n```\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\nCell In[3], line 5\r\n 2 input_ids = tokenizer.encode(text, return_tensors=\"pt\").to(model.device)\r\n 4 inputs_embeds = model.model.embed_tokens(input_ids)\r\n----> 5 outputs = model.generate(inputs_embeds=inputs_embeds, max_new_tokens=10)\r\n 6 print(\"\\ngenerate + inputs_embeds:\", tokenizer.decode(outputs[0], skip_special_tokens=True))\r\n\r\nFile [~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/torch/autograd/grad_mode.py:27](https://vscode-remote+ssh-002dremote-002bjaskier.vscode-resource.vscode-cdn.net/home/nropiak/git/InstructZero/~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/torch/autograd/grad_mode.py:27), in _DecoratorContextManager.__call__..decorate_context(*args, **kwargs)\r\n 24 @functools.wraps(func)\r\n 25 def decorate_context(*args, **kwargs):\r\n 26 with self.clone():\r\n---> 27 return func(*args, **kwargs)\r\n\r\nFile 
[~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/transformers/generation/utils.py:1522](https://vscode-remote+ssh-002dremote-002bjaskier.vscode-resource.vscode-cdn.net/home/nropiak/git/InstructZero/~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/transformers/generation/utils.py:1522), in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, **kwargs)\r\n 1516 raise ValueError(\r\n 1517 \"num_return_sequences has to be 1 when doing greedy search, \"\r\n 1518 f\"but is {generation_config.num_return_sequences}.\"\r\n 1519 )\r\n 1521 # 11. run greedy search\r\n-> 1522 return self.greedy_search(\r\n 1523 input_ids,\r\n 1524 logits_processor=logits_processor,\r\n 1525 stopping_criteria=stopping_criteria,\r\n 1526 pad_token_id=generation_config.pad_token_id,\r\n 1527 eos_token_id=generation_config.eos_token_id,\r\n 1528 output_scores=generation_config.output_scores,\r\n 1529 return_dict_in_generate=generation_config.return_dict_in_generate,\r\n 1530 synced_gpus=synced_gpus,\r\n 1531 streamer=streamer,\r\n 1532 **model_kwargs,\r\n 1533 )\r\n 1535 elif is_contrastive_search_gen_mode:\r\n 1536 if generation_config.num_return_sequences > 1:\r\n\r\nFile [~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/transformers/generation/utils.py:2339](https://vscode-remote+ssh-002dremote-002bjaskier.vscode-resource.vscode-cdn.net/home/nropiak/git/InstructZero/~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/transformers/generation/utils.py:2339), in GenerationMixin.greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs)\r\n 2336 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)\r\n 2338 # forward pass to get next token\r\n-> 2339 outputs = self(\r\n 2340 **model_inputs,\r\n 2341 return_dict=True,\r\n 2342 output_attentions=output_attentions,\r\n 2343 output_hidden_states=output_hidden_states,\r\n 2344 )\r\n 2346 if synced_gpus and this_peer_finished:\r\n 2347 continue # don't waste resources running the code we don't need\r\n\r\nFile [~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/torch/nn/modules/module.py:1194](https://vscode-remote+ssh-002dremote-002bjaskier.vscode-resource.vscode-cdn.net/home/nropiak/git/InstructZero/~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/torch/nn/modules/module.py:1194), in Module._call_impl(self, *input, **kwargs)\r\n 1190 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1191 # this function, and just call forward.\r\n 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1193 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1194 return forward_call(*input, **kwargs)\r\n 1195 # Do not call functions when jit is used\r\n 1196 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile [~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/accelerate/hooks.py:165](https://vscode-remote+ssh-002dremote-002bjaskier.vscode-resource.vscode-cdn.net/home/nropiak/git/InstructZero/~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/accelerate/hooks.py:165), in add_hook_to_module..new_forward(*args, **kwargs)\r\n 163 output = old_forward(*args, **kwargs)\r\n 164 else:\r\n--> 165 output = old_forward(*args, 
**kwargs)\r\n 166 return module._hf_hook.post_forward(module, output)\r\n\r\nFile [~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:688](https://vscode-remote+ssh-002dremote-002bjaskier.vscode-resource.vscode-cdn.net/home/nropiak/git/InstructZero/~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:688), in LlamaForCausalLM.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 685 return_dict = return_dict if return_dict is not None else self.config.use_return_dict\r\n 687 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)\r\n--> 688 outputs = self.model(\r\n 689 input_ids=input_ids,\r\n 690 attention_mask=attention_mask,\r\n 691 position_ids=position_ids,\r\n 692 past_key_values=past_key_values,\r\n 693 inputs_embeds=inputs_embeds,\r\n 694 use_cache=use_cache,\r\n 695 output_attentions=output_attentions,\r\n 696 output_hidden_states=output_hidden_states,\r\n 697 return_dict=return_dict,\r\n 698 )\r\n 700 hidden_states = outputs[0]\r\n 701 logits = self.lm_head(hidden_states)\r\n\r\nFile [~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/torch/nn/modules/module.py:1194](https://vscode-remote+ssh-002dremote-002bjaskier.vscode-resource.vscode-cdn.net/home/nropiak/git/InstructZero/~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/torch/nn/modules/module.py:1194), in Module._call_impl(self, *input, **kwargs)\r\n 1190 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1191 # this function, and just call forward.\r\n 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1193 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1194 return forward_call(*input, **kwargs)\r\n 1195 # Do not call functions when jit is used\r\n 1196 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile [~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/accelerate/hooks.py:165](https://vscode-remote+ssh-002dremote-002bjaskier.vscode-resource.vscode-cdn.net/home/nropiak/git/InstructZero/~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/accelerate/hooks.py:165), in add_hook_to_module..new_forward(*args, **kwargs)\r\n 163 output = old_forward(*args, **kwargs)\r\n 164 else:\r\n--> 165 output = old_forward(*args, **kwargs)\r\n 166 return module._hf_hook.post_forward(module, output)\r\n\r\nFile [~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:528](https://vscode-remote+ssh-002dremote-002bjaskier.vscode-resource.vscode-cdn.net/home/nropiak/git/InstructZero/~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:528), in LlamaModel.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 526 position_ids = position_ids.unsqueeze(0).view(-1, seq_length)\r\n 527 else:\r\n--> 528 position_ids = position_ids.view(-1, seq_length).long()\r\n 530 if inputs_embeds is None:\r\n 531 inputs_embeds = self.embed_tokens(input_ids)\r\n\r\nRuntimeError: shape '[-1, 3]' is invalid for input of size 4\r\n```\r\n\r\n",
"@NorbertRop The issue is fixed in #24639 🙌 (see the PR if you're curious about why it was breaking :) )",
"@NorbertRop should be fixed if you install from `main`"
] | 1,682 | 1,688 | 1,683 |
CONTRIBUTOR
| null |
I'm trying to use the `inputs_embeds` parameter to run the LLaMA model. This is part of my code.
```python
# INPUT = ...embedding of a sequence, ensuring that there are no pad tokens
output_sequences = LLaMA.generate(
    inputs_embeds=INPUT.to(device),
    pad_token_id=tokenizer.pad_token_id,
    # ... generation parameters, top_p, top_k, etc.
)
```
I keep getting this warning, and the results are complete gibberish. I know this exact model performs well if I pass `input_ids`.
```
A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set padding_side='left' when initializing the tokenizer.
```
After a lot of debugging, I found that this issue comes from the transformers library itself. The generate function checks whether the last token ID in every batch is the pad token ID, and if it is, it displays this warning.
https://github.com/huggingface/transformers/blob/a0e733283930bdb9ae2b1afdc53ec5f2daefb033/src/transformers/generation/utils.py#L1308-L1315
The `generate` function is expecting the shape `(Batch, Sequence)` where this logic would work.
```python
inputs_tensor[:, -1] == generation_config.pad_token_id
```
Now the problem is that I am passing `inputs_embeds`, not IDs. My shape is `(Batch, Sequence, EmbeddingSize)`, so the above comparison is true whenever there are zeros in the embedding of the last token. This is obviously incorrect.
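For illustration, a minimal sketch of a dimension-aware version of that check (my assumption of a possible fix, not the library's code):
```python
import torch

def right_padding_detected(inputs_tensor: torch.Tensor, pad_token_id: int) -> bool:
    # The pad-token comparison is only meaningful for 2D token-ID tensors;
    # embeddings of shape (batch, seq, hidden) should not trigger the warning.
    if inputs_tensor.dim() != 2:
        return False
    return bool((inputs_tensor[:, -1] == pad_token_id).any())
```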
That explains the warning but not the incorrect generation.
### Environment
- `transformers==4.28.0`
- Python 3.10.11
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23042/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23041
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23041/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23041/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23041/events
|
https://github.com/huggingface/transformers/issues/23041
| 1,688,025,727 |
I_kwDOCUB6oc5knTp_
| 23,041 |
Hugging Face - Time Series Transformer Error
|
{
"login": "modiparv",
"id": 93371014,
"node_id": "U_kgDOBZC6hg",
"avatar_url": "https://avatars.githubusercontent.com/u/93371014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/modiparv",
"html_url": "https://github.com/modiparv",
"followers_url": "https://api.github.com/users/modiparv/followers",
"following_url": "https://api.github.com/users/modiparv/following{/other_user}",
"gists_url": "https://api.github.com/users/modiparv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/modiparv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/modiparv/subscriptions",
"organizations_url": "https://api.github.com/users/modiparv/orgs",
"repos_url": "https://api.github.com/users/modiparv/repos",
"events_url": "https://api.github.com/users/modiparv/events{/privacy}",
"received_events_url": "https://api.github.com/users/modiparv/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"\r\n",
"cc @kashif ",
"@modiparv I believe you have configured the model with `num_static_categorical_features=0` and yet you are feeding the model static categorical covariates as the `static_categorical_features` input... perhaps kindly test without it and I can add an extra check there... ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
NONE
| null |
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/tmp/ipykernel_27/4172075850.py in <module>
8 static_real_features=batch1["static_real_features"],
9 future_values=batch1["future_values"],
---> 10 future_time_features=batch1["future_time_features"]
11 )
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
/opt/conda/lib/python3.7/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py in forward(self, past_values, past_time_features, past_observed_mask, static_categorical_features, static_real_features, future_values, future_time_features, future_observed_mask, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, output_hidden_states, output_attentions, use_cache, return_dict)
1611 output_attentions=output_attentions,
1612 use_cache=use_cache,
-> 1613 return_dict=return_dict,
1614 )
1615
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
/opt/conda/lib/python3.7/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py in forward(self, past_values, past_time_features, past_observed_mask, static_categorical_features, static_real_features, future_values, future_time_features, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, output_hidden_states, output_attentions, use_cache, return_dict)
1422 static_real_features=static_real_features,
1423 future_values=future_values,
-> 1424 future_time_features=future_time_features,
1425 )
1426
/opt/conda/lib/python3.7/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py in create_network_inputs(self, past_values, past_time_features, static_categorical_features, static_real_features, past_observed_mask, future_values, future_time_features)
1322 static_feat = torch.cat((static_real_features, static_feat), dim=1)
1323 if static_categorical_features is not None:
-> 1324 embedded_cat = self.embedder(static_categorical_features)
1325 static_feat = torch.cat((embedded_cat, static_feat), dim=1)
1326 expanded_static_feat = static_feat.unsqueeze(1).expand(-1, time_feat.shape[1], -1)
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
1264 return modules[name]
1265 raise AttributeError("'{}' object has no attribute '{}'".format(
-> 1266 type(self).__name__, name))
1267
1268     def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:
```
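Based on the maintainer's diagnosis above, a hypothetical configuration sketch (parameter values are illustrative): if `num_static_categorical_features` is 0, the model builds no embedder, so passing `static_categorical_features` at forward time triggers this `AttributeError`. Either drop that input or enable the feature:
```python
from transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerForPrediction

config = TimeSeriesTransformerConfig(
    prediction_length=24,
    num_static_categorical_features=1,  # must be > 0 if static_categorical_features is passed
    cardinality=[5],                    # one entry per static categorical feature
    embedding_dimension=[2],
)
model = TimeSeriesTransformerForPrediction(config)
```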
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23041/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23040
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23040/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23040/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23040/events
|
https://github.com/huggingface/transformers/pull/23040
| 1,688,018,572 |
PR_kwDOCUB6oc5PW7OQ
| 23,040 |
Skip pt/flax equivalence tests in pytorch `bigbird` test file
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @Bearnardd",
"_The documentation is not available anymore as the PR was closed or merged._",
"Good catch @ydshieh! I will fix pytorch's big bird implementation today/tomorrow.",
"@Bearnardd Thank you 🤗 . Luckily, there is no `TFBigBirdModel` 😆 "
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
#21023 fixed the random attention issue in the Flax bigbird model and skipped the pt/flax equivalence tests in the flax bigbird test file with
```txt
reason="Current Pytorch implementation has bug with random attention -> it always uses it not matter if we are in eval/train mode"
```
We need to skip the pt/flax equivalence tests in the **pytorch** bigbird test file too.
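For illustration, a hypothetical excerpt of how such a skip is typically expressed in the test file:
```python
import unittest

class BigBirdModelTest(unittest.TestCase):
    @unittest.skip(
        reason="Current Pytorch implementation has bug with random attention -> "
        "it always uses it not matter if we are in eval/train mode"
    )
    def test_equivalence_pt_to_flax(self):
        ...
```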
Currently on `main`, the tests fail
https://app.circleci.com/pipelines/github/huggingface/transformers/63217/workflows/5d512271-f535-44be-a2ec-b95024f8f165/jobs/780069
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23040/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23040/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23040",
"html_url": "https://github.com/huggingface/transformers/pull/23040",
"diff_url": "https://github.com/huggingface/transformers/pull/23040.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23040.patch",
"merged_at": 1682694013000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23039
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23039/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23039/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23039/events
|
https://github.com/huggingface/transformers/pull/23039
| 1,687,898,388 |
PR_kwDOCUB6oc5PWhgx
| 23,039 |
Fix model parallelism for `BridgeTower`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
Make `BridgeTower` work with model parallelism. The test `test_model_parallelism` is still skipped, as the tiny version hits edge cases.
With larger values of `hidden_size`, `num_hidden_layers`, etc., it now works where it previously failed.
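For illustration, a minimal usage sketch of what this enables (my assumptions: `accelerate` is installed, more than one device is visible, and the checkpoint name is illustrative):
```python
from transformers import BridgeTowerModel

# Shards the model across the available devices.
model = BridgeTowerModel.from_pretrained("BridgeTower/bridgetower-base", device_map="auto")
print(model.hf_device_map)
```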
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23039/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23039",
"html_url": "https://github.com/huggingface/transformers/pull/23039",
"diff_url": "https://github.com/huggingface/transformers/pull/23039.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23039.patch",
"merged_at": 1682711638000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23038
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23038/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23038/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23038/events
|
https://github.com/huggingface/transformers/pull/23038
| 1,687,890,202 |
PR_kwDOCUB6oc5PWfxz
| 23,038 |
Update trainer_utils.py
|
{
"login": "mzamini92",
"id": 32536264,
"node_id": "MDQ6VXNlcjMyNTM2MjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/32536264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mzamini92",
"html_url": "https://github.com/mzamini92",
"followers_url": "https://api.github.com/users/mzamini92/followers",
"following_url": "https://api.github.com/users/mzamini92/following{/other_user}",
"gists_url": "https://api.github.com/users/mzamini92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mzamini92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mzamini92/subscriptions",
"organizations_url": "https://api.github.com/users/mzamini92/orgs",
"repos_url": "https://api.github.com/users/mzamini92/repos",
"events_url": "https://api.github.com/users/mzamini92/events{/privacy}",
"received_events_url": "https://api.github.com/users/mzamini92/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23038). All of your documentation changes will be reflected on that endpoint.",
"cc @muellerzr ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@muellerzr Could you have a look here?",
"@mzamini92 could you rebase so we can double check no tests are breaking with this and we can merge? Thanks!",
"> @mzamini92 could you rebase so we can double check no tests are breaking with this and we can merge? Thanks!\r\n\r\n@muellerzr Thanks for reaching me. I did it based on Sylvain suggestion. please double check and I will revise if needed."
] | 1,682 | 1,687 | 1,687 |
NONE
| null |
This modified version of the function includes a check for whether the output of the function has a learning rate scheduler that needs to be updated based on the current batch size. If so, it updates the `num_batches` attribute of the scheduler to ensure that the learning rate is adjusted correctly.
# What does this PR do?
It can be one solution for the problem of the `lr_scheduler` not being updated when `auto_find_batch_size` is set to `True` and the batch size decays (#21521).
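For illustration, a hypothetical sketch of the described check (the attribute and helper names are my assumptions based on the description above, not the actual `trainer_utils.py` code):
```python
import math

def rescale_scheduler(trainer, new_batch_size: int, dataset_len: int) -> None:
    # When auto_find_batch_size shrinks the batch size, the number of batches
    # per epoch grows, so the scheduler's horizon must be recomputed.
    if getattr(trainer, "lr_scheduler", None) is None:
        return
    if hasattr(trainer.lr_scheduler, "num_batches"):
        trainer.lr_scheduler.num_batches = math.ceil(dataset_len / new_batch_size)
```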
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23038/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23038",
"html_url": "https://github.com/huggingface/transformers/pull/23038",
"diff_url": "https://github.com/huggingface/transformers/pull/23038.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23038.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23037
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23037/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23037/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23037/events
|
https://github.com/huggingface/transformers/pull/23037
| 1,687,786,901 |
PR_kwDOCUB6oc5PWKRt
| 23,037 |
Add LTG-BERT model
|
{
"login": "davda54",
"id": 937184,
"node_id": "MDQ6VXNlcjkzNzE4NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/937184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davda54",
"html_url": "https://github.com/davda54",
"followers_url": "https://api.github.com/users/davda54/followers",
"following_url": "https://api.github.com/users/davda54/following{/other_user}",
"gists_url": "https://api.github.com/users/davda54/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davda54/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davda54/subscriptions",
"organizations_url": "https://api.github.com/users/davda54/orgs",
"repos_url": "https://api.github.com/users/davda54/repos",
"events_url": "https://api.github.com/users/davda54/events{/privacy}",
"received_events_url": "https://api.github.com/users/davda54/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23037). All of your documentation changes will be reflected on that endpoint.",
"Hey! Great that you want to share this model 🔥 Would you be open to put ot on the hub following [this tutorial](https://huggingface.co/docs/transformers/custom_models)! Will be easier as there won't be any CI issues and since it is very similar to an existing model, this makes more sense!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey if you added the model on the hub could you share the links to it? This will help us keep track of the models that we support on the hub "
] | 1,682 | 1,686 | 1,685 |
NONE
| null |
# LTG-BERT
This pull request adds the custom LTG-BERT model to the repository. This optimized LM architecture was introduced [in this paper](https://arxiv.org/abs/2303.09859) and is currently also used by a new generation of Norwegian LMs. The architecture features multiple improvements to the standard transformer module, so we unfortunately cannot use any existing HF model wrappers.
@ArthurZucker and @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23037/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23037",
"html_url": "https://github.com/huggingface/transformers/pull/23037",
"diff_url": "https://github.com/huggingface/transformers/pull/23037.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23037.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23036
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23036/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23036/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23036/events
|
https://github.com/huggingface/transformers/issues/23036
| 1,687,615,988 |
I_kwDOCUB6oc5klvn0
| 23,036 |
[New model] Bark for realistic text-to-speech
|
{
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"Hello, if no one else is working on this, I would love to take a look and try to add this model to HuggingFace! ",
"https://github.com/huggingface/transformers/pull/23375",
"see https://github.com/huggingface/transformers/pull/24086"
] | 1,682 | 1,689 | 1,689 |
CONTRIBUTOR
| null |
### Model description
As stated in their [README](https://github.com/suno-ai/bark/blob/main/README.md):
> Bark is a transformer-based text-to-audio model created by [Suno](https://suno.ai/). Bark can generate highly realistic, multilingual speech as well as other audio - including music, background noise and simple sound effects. The model can also produce nonverbal communications like laughing, sighing and crying. To support the research community, we are providing access to pretrained model checkpoints ready for inference.
Some of their demos are quite amazing (albeit slightly creepy), being able to add "uhms" and "ahhs" in the synthesized audio. For example:
```
Hello, my name is Suno. And, uh — and I like pizza. [laughs]
But I also have other interests such as playing tic tac toe.
```
https://user-images.githubusercontent.com/34592747/238155864-cfa98e54-721c-4b9c-b962-688e09db684f.webm
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
GitHub repo: https://github.com/suno-ai/bark
Author: @gkucsko
Demo: https://huggingface.co/spaces/suno/bark
Model weights: Although not very well documented, [here](https://github.com/suno-ai/bark/blob/2c12023eb22868a633b76357b69d657b374736d9/bark/generation.py#L92-L119) is the portion of the code which links to the model weights. @Vaibhavs10 also looks to have uploaded them to the HF Hub [here](https://huggingface.co/reach-vb/bark-small) 🔥
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23036/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23036/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23035
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23035/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23035/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23035/events
|
https://github.com/huggingface/transformers/pull/23035
| 1,687,607,284 |
PR_kwDOCUB6oc5PVkW9
| 23,035 |
Save the tokenizer and image preprocessor after training a model with the contrastive image-text example
|
{
"login": "regisss",
"id": 15324346,
"node_id": "MDQ6VXNlcjE1MzI0MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/15324346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/regisss",
"html_url": "https://github.com/regisss",
"followers_url": "https://api.github.com/users/regisss/followers",
"following_url": "https://api.github.com/users/regisss/following{/other_user}",
"gists_url": "https://api.github.com/users/regisss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/regisss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/regisss/subscriptions",
"organizations_url": "https://api.github.com/users/regisss/orgs",
"repos_url": "https://api.github.com/users/regisss/repos",
"events_url": "https://api.github.com/users/regisss/events{/privacy}",
"received_events_url": "https://api.github.com/users/regisss/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger All tests passed, so I think this one can be merged :slightly_smiling_face: "
] | 1,682 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
When training a model with the contrastive image-text example, only the model is saved (see [here](https://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/examples/pytorch/contrastive-image-text/run_clip.py#L512)). As a consequence, when using the trained model to perform inference only with the same script, an error will be raised [here](https://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/examples/pytorch/contrastive-image-text/run_clip.py#L324) and [there](https://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/examples/pytorch/contrastive-image-text/run_clip.py#L335) because the checkpoint contains neither a tokenizer nor a preprocessor.
This PR fixes this issue by saving the tokenizer and the image preprocessor at the end of the training.
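For illustration, a minimal sketch of the fix (assuming `trainer`, `tokenizer` and `image_processor` are the objects already created in `run_clip.py`):
```python
# Save the model together with the tokenizer and the image preprocessor so the
# checkpoint can be reloaded with AutoTokenizer/AutoImageProcessor for inference.
trainer.save_model()
tokenizer.save_pretrained(training_args.output_dir)
image_processor.save_pretrained(training_args.output_dir)
```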
To reproduce it, after creating a model following [this section](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text#create-a-model-from-a-vision-encoder-model-and-a-text-encoder-model), run:
```bash
python examples/pytorch/contrastive-image-text/run_clip.py \
--output_dir ./clip-roberta-finetuned \
--model_name_or_path ./clip-roberta \
--data_dir $PWD/data \
--dataset_name ydshieh/coco_dataset_script \
--dataset_config_name=2017 \
--image_column image_path \
--caption_column caption \
--remove_unused_columns=False \
--do_train \
--per_device_train_batch_size="64" \
--learning_rate="5e-5" \
--overwrite_output_dir
```
And then run:
```bash
python examples/pytorch/contrastive-image-text/run_clip.py \
--output_dir ./clip-roberta-finetuned \
--model_name_or_path ./clip-roberta-finetuned \
--data_dir $PWD/data \
--dataset_name ydshieh/coco_dataset_script \
--dataset_config_name=2017 \
--image_column image_path \
--caption_column caption \
--remove_unused_columns=False \
--do_eval \
--per_device_eval_batch_size="64" \
--overwrite_output_dir
```
which raises the following error:
```
Traceback (most recent call last):
File "run_clip.py", line 540, in <module>
main()
File "run_clip.py", line 325, in main
tokenizer = AutoTokenizer.from_pretrained(
File "/home/ubuntu/workspace/venv/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 718, in from_pretrained
tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)]
File "/home/ubuntu/workspace/venv/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 610, in __getitem__
raise KeyError(key)
KeyError: <class 'transformers.models.vision_text_dual_encoder.configuration_vision_text_dual_encoder.VisionTextDualEncoderConfig'>
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23035/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23035",
"html_url": "https://github.com/huggingface/transformers/pull/23035",
"diff_url": "https://github.com/huggingface/transformers/pull/23035.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23035.patch",
"merged_at": 1683033797000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23034
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23034/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23034/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23034/events
|
https://github.com/huggingface/transformers/issues/23034
| 1,687,481,650 |
I_kwDOCUB6oc5klO0y
| 23,034 |
Cannot resume FSDP optimizer state
|
{
"login": "qywu",
"id": 18195478,
"node_id": "MDQ6VXNlcjE4MTk1NDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/18195478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qywu",
"html_url": "https://github.com/qywu",
"followers_url": "https://api.github.com/users/qywu/followers",
"following_url": "https://api.github.com/users/qywu/following{/other_user}",
"gists_url": "https://api.github.com/users/qywu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qywu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qywu/subscriptions",
"organizations_url": "https://api.github.com/users/qywu/orgs",
"repos_url": "https://api.github.com/users/qywu/repos",
"events_url": "https://api.github.com/users/qywu/events{/privacy}",
"received_events_url": "https://api.github.com/users/qywu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @pacman100 ",
"Hello @qywu, indeed, that seems to be the case, as you already have the fix, it would be great if you could raise the PR with the fixes, Thank you!"
] | 1,682 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
This line does not save the optimizer state correctly when using FSDP:
https://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/src/transformers/trainer.py#L2383
It should use FSDP's `full_optim_state_dict` to collect the optimizer states from the different processes:
```python
FSDP.full_optim_state_dict(self.model, self.optimizer)
```
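For illustration, a minimal sketch of how the gathered state could be saved inside the `Trainer` (variable names are my assumptions; `OPTIMIZER_NAME` stands for the checkpoint file name the Trainer uses):
```python
import os

import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Gathers the sharded optimizer state onto rank 0 before saving.
full_osd = FSDP.full_optim_state_dict(self.model, self.optimizer)
if self.args.should_save:
    torch.save(full_osd, os.path.join(output_dir, OPTIMIZER_NAME))
```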
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23034/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23033
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23033/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23033/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23033/events
|
https://github.com/huggingface/transformers/issues/23033
| 1,687,442,354 |
I_kwDOCUB6oc5klFOy
| 23,033 |
Trainer defaults to NCCL backend for ddp on windows
|
{
"login": "btrude",
"id": 26334169,
"node_id": "MDQ6VXNlcjI2MzM0MTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/26334169?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/btrude",
"html_url": "https://github.com/btrude",
"followers_url": "https://api.github.com/users/btrude/followers",
"following_url": "https://api.github.com/users/btrude/following{/other_user}",
"gists_url": "https://api.github.com/users/btrude/gists{/gist_id}",
"starred_url": "https://api.github.com/users/btrude/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/btrude/subscriptions",
"organizations_url": "https://api.github.com/users/btrude/orgs",
"repos_url": "https://api.github.com/users/btrude/repos",
"events_url": "https://api.github.com/users/btrude/events{/privacy}",
"received_events_url": "https://api.github.com/users/btrude/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"On main `xpu_backend` (soon to be renamed to `ddp_backend`) lets you pick up the backend you want.",
"Excellent, thank you!"
] | 1,682 | 1,682 | 1,682 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.10.10
- Huggingface_hub version: 0.13.3
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run the following script natively on Windows with >1 NVIDIA GPU. This example uses JSONL input in the same format as the OpenAI API, e.g.:
```json
{"prompt": "I wish ddp worked with the trainer in windows", "completion": "it does, you are just doing it wrong!"}
{"prompt": "The moon is made of green cheese", "completion": "ok, but I was asking about the huggingface trainer..."}
```
```py
from datasets import load_dataset
from transformers import AutoConfig
from transformers import AutoModelForCausalLM
from transformers import AutoTokenizer
from transformers import DataCollatorForLanguageModeling
from transformers import Trainer
from transformers import TrainingArguments
def train(
train_file_path,
eval_file_path=None,
name=None,
n_epochs=5,
model_name="EleutherAI/gpt-neo-125m",
use_scheduler=False,
):
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"
model = AutoModelForCausalLM.from_pretrained(model_name)
print(model.config)
if eval_file_path:
train_dataset = load_dataset("json", data_files=train_file_path)
eval_dataset = load_dataset("json", data_files=eval_file_path)
else:
eval_dataset = load_dataset("json", data_files=train_file_path, split="train[:5%]")
train_dataset = load_dataset("json", data_files=train_file_path, split="train[5%:]")
def tokenize_dataset(entry):
inputs = tokenizer(entry["prompt"] + entry["completion"], return_tensors="pt")
return {
"input_ids": inputs["input_ids"],
"attention_mask": inputs["attention_mask"],
}
n_steps_epoch = len(train_dataset)
train_dataset = train_dataset.map(tokenize_dataset).remove_columns(["prompt", "completion"])
eval_dataset = eval_dataset.map(tokenize_dataset).remove_columns(["prompt", "completion"])
data_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
training_args = TrainingArguments(
output_dir=name,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
gradient_accumulation_steps=32,
logging_dir=f"{name}/logs",
logging_steps=16,
evaluation_strategy="steps",
save_steps=n_steps_epoch // 2048,
eval_steps=n_steps_epoch // 1024,
save_total_limit=3,
report_to="tensorboard",
tf32=True,
seed=1679815,
)
training_args.set_optimizer(
"adamw_torch_fused",
learning_rate=1e-4,
beta1=.9,
beta2=.95,
epsilon=1e-8,
weight_decay=.1
)
if use_scheduler:
training_args.set_lr_scheduler(
name="linear",
warmup_steps=250,
num_epochs=n_epochs,
)
print(training_args)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
)
trainer.train()
if __name__ == "__main__":
train("path/to/json/dataset", name="no_gloo")
```
### Expected behavior
Outside of the context of the Hugging Face trainer, I am able to use the gloo backend in conjunction with MPI for distributed training with PyTorch using the following setup:
```python
import os

import torch.distributed as dist
from mpi4py import MPI  # assumption: the original script used mpi4py for MPI


def setup_ddp(hps):
    if hps.ddp:
        port = "29500"
        rank = MPI.COMM_WORLD.Get_rank()
        world_size = MPI.COMM_WORLD.Get_size()
        os.environ["RANK"] = str(rank)
        os.environ["WORLD_SIZE"] = str(world_size)
        os.environ["MASTER_ADDR"] = "localhost"
        os.environ["MASTER_PORT"] = port
        dist.init_process_group("gloo", rank=rank, world_size=world_size)
        group = dist.new_group(ranks=devices())  # devices() is defined elsewhere in the original script
    else:
        rank, world_size, group = 0, 1, None
    return rank, world_size, group
```
When running the provided training script in Linux (via WSL2) with >1 GPU, everything executes as one would expect, but WSL2 is sadly significantly slower than native.
I would expect the trainer to expose the backend that is used as a variable, but `xpu_backend` does not affect the behavior I am experiencing, nor is it immediately clear whether this is meant to be configurable as-is. NCCL is not currently supported on Windows in PyTorch (https://github.com/pytorch/pytorch/issues/89688), so the trainer should not attempt to default to NCCL unless it is installed/supported by the OS (i.e. Windows should always default to a different, functional backend).
At the very least I would expect a clean error message explaining that DDP is not supported on Windows, or whatever the actual state of compatibility is. Instead, the script throws warnings about PyTorch not being compiled with NCCL support and fails on the first forward pass with inscrutable CUDA errors.
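As the maintainers note in the comments, the backend is configurable through `TrainingArguments` on main; a minimal sketch of the workaround (assuming a version where the argument is still called `xpu_backend`, before the rename to `ddp_backend`):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gloo_run",
    per_device_train_batch_size=8,
    xpu_backend="gloo",  # renamed to ddp_backend in later releases
)
```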
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23033/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23032
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23032/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23032/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23032/events
|
https://github.com/huggingface/transformers/pull/23032
| 1,687,192,303 |
PR_kwDOCUB6oc5PUKR4
| 23,032 |
Fix CLAP link across all READMEs
|
{
"login": "ehsanmok",
"id": 6980212,
"node_id": "MDQ6VXNlcjY5ODAyMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6980212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ehsanmok",
"html_url": "https://github.com/ehsanmok",
"followers_url": "https://api.github.com/users/ehsanmok/followers",
"following_url": "https://api.github.com/users/ehsanmok/following{/other_user}",
"gists_url": "https://api.github.com/users/ehsanmok/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ehsanmok/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ehsanmok/subscriptions",
"organizations_url": "https://api.github.com/users/ehsanmok/orgs",
"repos_url": "https://api.github.com/users/ehsanmok/repos",
"events_url": "https://api.github.com/users/ehsanmok/events{/privacy}",
"received_events_url": "https://api.github.com/users/ehsanmok/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm getting\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/ehsan/workspace/transformers/utils/check_task_guides.py\", line 58, in <module>\r\n \"asr.mdx\": transformers_module.models.auto.modeling_auto.MODEL_FOR_CTC_MAPPING_NAMES,\r\n File \"/Users/ehsan/workspace/transformers/src/transformers/utils/import_utils.py\", line 1150, in __getattr__\r\n raise AttributeError(f\"module {self.__name__} has no attribute {name}\")\r\nAttributeError: module transformers.models.auto has no attribute modeling_auto. Did you mean: 'processing_auto'?\r\n```\r\n\r\nnot sure what's going on so I added it manually to all the `index.mdx`. It seems not all models research paper links have been synced across docs.",
"Thanks again!"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes CLAP link across all READMEs
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23032/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23032",
"html_url": "https://github.com/huggingface/transformers/pull/23032",
"diff_url": "https://github.com/huggingface/transformers/pull/23032.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23032.patch",
"merged_at": 1682633222000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23031
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23031/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23031/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23031/events
|
https://github.com/huggingface/transformers/pull/23031
| 1,687,079,905 |
PR_kwDOCUB6oc5PTyYY
| 23,031 |
Add methods to update and verify out_features out_indices
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh @sgugger I've updated with the setter tip - all looks a lot tidier! Let me know if the changes are OK. \r\n\r\nIn the spirit of not doing things magically, when setting the `out_features` and `out_indices` should I have a `logger.warning_once` notifying the user the other property is also updated? ",
"Thanks for the iteration. I don't feel strong to `logger.warning_once` when setting one property (as it's mentioned in the docstring), but it's a good thought! Let's see what Sylvain thinks."
] | 1,682 | 1,683 | 1,683 |
COLLABORATOR
| null |
# What does this PR do?
`out_features` and `out_indices` are two parameters which control the behaviour of a backbone. `out_indices` was recently [added as a config argument](https://github.com/huggingface/transformers/pull/22493) for the future addition of timm backbones (otherwise the timm backbone requires loading in, inspecting the feature names, then mapping to equivalent names in transformers).
It's necessary that `out_features` and `out_indices` are consistent, i.e. that they both map to the same stage names. Otherwise there are conflicting sources of truth in the config. At the moment, `out_features` and `out_indices` are set and verified [within the config](https://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/src/transformers/models/swin/configuration_swin.py#L162-L189).
For backwards compatibility, backbone models can be created even if their config only has `out_features` set e.g. here for [SwinBackbone](https://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/src/transformers/models/swin/modeling_swin.py#L1268).
However, it's possible to modify the config after creation e.g. [like here in the DINAT tests](https://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/tests/models/dinat/test_modeling_dinat.py#L178), resulting in a mismatch between `out_features` and `out_indices`.
This PR resolves two issues by creating a single backbone utils module (see the sketch after this list).
1. Ensures the `out_features` and `out_indices` attributes can only be updated using the `set_out_features` and `set_out_indices` methods respectively. These perform argument checks and update the complementary attribute.
2. Removes the repeated `out_features` and `out_indices` getting and verification logic shared between configurations and backbone models.
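A minimal sketch of what such paired setters could look like (hedged: the class name, the `stage_names` attribute, and the error messages are illustrative assumptions, not the exact code added here):
```python
class BackboneMixinSketch:
    """Keeps `out_features` and `out_indices` pointing at the same stages."""

    def __init__(self, stage_names):
        self.stage_names = list(stage_names)
        self._out_features = [self.stage_names[-1]]
        self._out_indices = [len(self.stage_names) - 1]

    def set_out_features(self, out_features):
        if not all(f in self.stage_names for f in out_features):
            raise ValueError(f"out_features must be a subset of {self.stage_names}")
        self._out_features = list(out_features)
        # Update the complementary attribute so both stay consistent.
        self._out_indices = [self.stage_names.index(f) for f in out_features]

    def set_out_indices(self, out_indices):
        if not all(0 <= i < len(self.stage_names) for i in out_indices):
            raise ValueError(f"out_indices must index into {self.stage_names}")
        self._out_indices = list(out_indices)
        self._out_features = [self.stage_names[i] for i in out_indices]
```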
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23031/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23031",
"html_url": "https://github.com/huggingface/transformers/pull/23031",
"diff_url": "https://github.com/huggingface/transformers/pull/23031.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23031.patch",
"merged_at": 1683191707000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23030
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23030/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23030/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23030/events
|
https://github.com/huggingface/transformers/pull/23030
| 1,687,045,621 |
PR_kwDOCUB6oc5PTq95
| 23,030 |
GPT2ForQuestionAnswering
|
{
"login": "peter-sk",
"id": 6168908,
"node_id": "MDQ6VXNlcjYxNjg5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6168908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peter-sk",
"html_url": "https://github.com/peter-sk",
"followers_url": "https://api.github.com/users/peter-sk/followers",
"following_url": "https://api.github.com/users/peter-sk/following{/other_user}",
"gists_url": "https://api.github.com/users/peter-sk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peter-sk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peter-sk/subscriptions",
"organizations_url": "https://api.github.com/users/peter-sk/orgs",
"repos_url": "https://api.github.com/users/peter-sk/repos",
"events_url": "https://api.github.com/users/peter-sk/events{/privacy}",
"received_events_url": "https://api.github.com/users/peter-sk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ArthurZucker @sgugger @younesbelkada The next one is ready for review.\r\n\r\nThis is a bit funny. The tests are fine, and it runs great on one of my 4x V100 machines. But on another machine, I get this funny error:\r\n\r\n/home/jps/anaconda3/envs/scandeval/lib/python3.9/site-packages/transformers/trainer.py:375\r\n/home/jps/anaconda3/envs/scandeval/lib/python3.9/site-packages/torch/nn/modules/module.py:1269\r\nAttributeError: 'GPT2ForQuestionAnswering' object has no attribute 'model_parallel'\r\n\r\nAny ideas? I though model_parallel was legacy and not needed. Should I add to be on the safe side? torch version is torch==1.13.1.",
"GPT2 is an old model so it might still be checking for `model_parallel`, which in this case has to be added! LMK if this fixes the issues ",
"@ArthurZucker ready for review!",
"cc @younesbelkada since Arthur is on vacation."
] | 1,682 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
# What does this PR do?
Adds `GPT2ForQuestionAnswering`.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23030/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23030",
"html_url": "https://github.com/huggingface/transformers/pull/23030",
"diff_url": "https://github.com/huggingface/transformers/pull/23030.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23030.patch",
"merged_at": 1683033947000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23029
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23029/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23029/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23029/events
|
https://github.com/huggingface/transformers/pull/23029
| 1,687,009,577 |
PR_kwDOCUB6oc5PTjMY
| 23,029 |
Update `BridgeTowerModelTester`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Remark: with lager model (but not too large), we get\r\n\r\n```bash\r\nFAILED tests/models/bridgetower/test_modeling_bridgetower.py::BridgeTowerModelTest::test_model_parallelism - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!\r\n```\r\n\r\nBetter to check this separately.\r\n\r\n---------------------\r\n\r\nHere is the full log\r\n\r\n```bash\r\n> new_output = new_model(**inputs_dict_class)\r\n\r\ntests/test_modeling_common.py:2616: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1501: in _call_impl\r\n return forward_call(*args, **kwargs)\r\n/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py:165: in new_forward\r\n output = old_forward(*args, **kwargs)\r\nsrc/transformers/models/bridgetower/modeling_bridgetower.py:1423: in forward\r\n image_embeds = self.vision_model.visual.transformer.resblocks[i](image_embeds).type(\r\n/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1501: in _call_impl\r\n return forward_call(*args, **kwargs)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = BridgeTowerResidualAttention(\r\n (attn): MultiheadAttention(\r\n (out_proj): NonDynamicallyQuantizableLinear(in_feature...ar(in_features=2048, out_features=512, bias=True)\r\n )\r\n (ln_2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)\r\n)\r\nhidden_state = tensor([[[ 0.5531, 0.0555, -0.0248, ..., 0.2110, -0.0403, 0.0487]],\r\n\r\n [[ 0.2963, -0.1709, 0.0074, ..., 0... [[ 0.3324, -0.0536, -0.0069, ..., 0.0911, -0.0565, -0.2751]]],\r\n device='cuda:1', grad_fn=<ViewBackward0>)\r\nattention_mask = None\r\n\r\n def forward(self, hidden_state: torch.Tensor, attention_mask: torch.Tensor = None):\r\n residual_state = hidden_state + self.attention(self.ln_1(hidden_state), attention_mask)\r\n hidden_state = self.ln_2(residual_state)\r\n for _, layer in self.mlp.items():\r\n hidden_state = layer(hidden_state)\r\n> hidden_state = residual_state + hidden_state\r\nE RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!\r\n\r\nsrc/transformers/models/bridgetower/modeling_bridgetower.py:237: RuntimeError\r\n================================================================================================== warnings summary ==================================================================================================\r\n../usr/local/lib/python3.8/dist-packages/detectron2/data/transforms/transform.py:46\r\n /usr/local/lib/python3.8/dist-packages/detectron2/data/transforms/transform.py:46: DeprecationWarning: LINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). 
Use BILINEAR or Resampling.BILINEAR instead.\r\n def __init__(self, src_rect, output_size, interp=Image.LINEAR, fill=0):\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n============================================================================================== short test summary info ===============================================================================================\r\nFAILED tests/models/bridgetower/test_modeling_bridgetower.py::BridgeTowerModelTest::test_model_parallelism - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!\r\n```"
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
Update `BridgeTowerModelTester` to use small values for the config.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23029/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23029",
"html_url": "https://github.com/huggingface/transformers/pull/23029",
"diff_url": "https://github.com/huggingface/transformers/pull/23029.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23029.patch",
"merged_at": 1682612777000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23028
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23028/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23028/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23028/events
|
https://github.com/huggingface/transformers/pull/23028
| 1,686,879,175 |
PR_kwDOCUB6oc5PTGrd
| 23,028 |
[MEGA] nit size test
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
Addresses #23025: the `input_shape` should be tested, not `input_ids`, because the latter might be `None`.
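A minimal sketch of the idea (hedged: `resolve_input_shape` is an illustrative helper, not the name used in the model code):
```python
from typing import Optional, Tuple

import torch


def resolve_input_shape(
    input_ids: Optional[torch.Tensor], inputs_embeds: Optional[torch.Tensor]
) -> Tuple[int, ...]:
    """Derive (batch_size, seq_len) from whichever input was actually provided."""
    if input_ids is not None:
        return tuple(input_ids.size())
    if inputs_embeds is not None:
        return tuple(inputs_embeds.size()[:-1])
    raise ValueError("You have to specify either input_ids or inputs_embeds")


# The chunking guard can then test the shape instead of input_ids directly:
# if config.use_chunking and input_shape[1] > config.chunk_size: ...
```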
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23028/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23028",
"html_url": "https://github.com/huggingface/transformers/pull/23028",
"diff_url": "https://github.com/huggingface/transformers/pull/23028.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23028.patch",
"merged_at": 1682605261000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23027
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23027/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23027/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23027/events
|
https://github.com/huggingface/transformers/pull/23027
| 1,686,822,704 |
PR_kwDOCUB6oc5PS6SJ
| 23,027 |
remove tee
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Example test that does not have tee: https://app.circleci.com/pipelines/github/huggingface/transformers/63142/workflows/4754f622-1e12-40bb-be2f-0dcb363a216b/jobs/778648/steps?invite=true#step-111-6582 \n\n",
"CI outputs without tee: \r\n\r\nWith tee: \r\n<img width=\"995\" alt=\"image\" src=\"https://user-images.githubusercontent.com/48595927/234875612-209fd9fe-8961-4f25-abb4-11551402ee73.png\">\r\n",
"Other files are still available: \r\n[~/transformers/installed.txt](https://output.circle-artifacts.com/output/job/c12dd81b-f182-4b6a-bdaf-051887b78948/artifacts/0/~/transformers/installed.txt)\r\n[~/transformers/reports/tests_onnx/durations.txt](https://output.circle-artifacts.com/output/job/c12dd81b-f182-4b6a-bdaf-051887b78948/artifacts/0/~/transformers/reports/tests_onnx/durations.txt)\r\n[~/transformers/reports/tests_onnx/stats.txt](https://output.circle-artifacts.com/output/job/c12dd81b-f182-4b6a-bdaf-051887b78948/artifacts/0/~/transformers/reports/tests_onnx/stats.txt)\r\n[~/transformers/reports/tests_onnx/summary_short.txt](https://output.circle-artifacts.com/output/job/c12dd81b-f182-4b6a-bdaf-051887b78948/artifacts/0/~/transformers/reports/tests_onnx/summary_short.txt)\r\n[~/transformers/reports/tests_onnx/warnings.txt](https://output.circle-artifacts.com/output/job/c12dd81b-f182-4b6a-bdaf-051887b78948/artifacts/0/~/transformers/reports/tests_onnx/warnings.txt) ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23027). All of your documentation changes will be reflected on that endpoint.",
"> No one looks at the artifacts\r\n\r\nSpeak for yourself ;-) , I only look at artifacts since it's impossible to get the traceback in the output. I do not look at the output artifact however, so that change wouldn't impact how I use the reports. However I'm not alone so make sure @LysandreJik @amyeroberts and @ydshieh all agree before merging this.",
"(The full traceback can still be seen!)",
"It seems expanding the run test step is still fast, and personally I don't read `test_output.txt` but just the reports given by `--make_reports`, I am OK for this change.\r\n\r\nOne thing remaining is to remove the upload artifact step if we don't produce it. So far, we get\r\n\r\n```bash\r\n\r\nUploading /home/circleci/transformers/tests_output.txt to ~/transformers/tests_output.txt\r\n No artifact files found at /home/circleci/transformers/tests_output.txt\r\nTotal size uploaded: 0 B\r\n```",
"I use the artefacts all the time :D! \r\n\r\nI think it's fine for `test_outputs.txt` to go though, as I rarely look at it and I think all the other info can be found in the other .txt files 👍 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
COLLABORATOR
| null |
# What does this PR do?
Remove the piping with `| tee` when running pytest.
1. No one looks at the artifacts, and `tee` makes the output unreadable, colorless, etc.
2. I checked that even if you remove this, the outputs are still visible, because CircleCI uses custom file-system handling for this rather than the `output.txt`.
3. Subtests are omitted from the output, thus it does not include everything.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23027/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23027",
"html_url": "https://github.com/huggingface/transformers/pull/23027",
"diff_url": "https://github.com/huggingface/transformers/pull/23027.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23027.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23026
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23026/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23026/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23026/events
|
https://github.com/huggingface/transformers/pull/23026
| 1,686,664,687 |
PR_kwDOCUB6oc5PSXk7
| 23,026 |
[i18n-KO] Translated video_classification.mdx to Korean
|
{
"login": "kihoon71",
"id": 75935546,
"node_id": "MDQ6VXNlcjc1OTM1NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/75935546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kihoon71",
"html_url": "https://github.com/kihoon71",
"followers_url": "https://api.github.com/users/kihoon71/followers",
"following_url": "https://api.github.com/users/kihoon71/following{/other_user}",
"gists_url": "https://api.github.com/users/kihoon71/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kihoon71/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kihoon71/subscriptions",
"organizations_url": "https://api.github.com/users/kihoon71/orgs",
"repos_url": "https://api.github.com/users/kihoon71/repos",
"events_url": "https://api.github.com/users/kihoon71/events{/privacy}",
"received_events_url": "https://api.github.com/users/kihoon71/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey! Sorry for the long delay. There seems to be 2 suggestions not adresses, should we wait for these? 🤗 ",
"@ArthurZucker the suggestions that i didn't accept were about same sentences or ealier version of our glossary so you don't need to wait for the other suggestions to be accepted!! Thank you!!"
] | 1,682 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
# What does this PR do?
Translated the video_classification.mdx file of the documentation to Korean.
Thank you in advance for your review.
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. [[lowercased-header]])
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review(initial)?
Team PseudoLab, may you please review this PR? @0525hhgus, @HanNayeoniee, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review(initial)?
@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23026/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23026/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23026",
"html_url": "https://github.com/huggingface/transformers/pull/23026",
"diff_url": "https://github.com/huggingface/transformers/pull/23026.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23026.patch",
"merged_at": 1685453324000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23025
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23025/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23025/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23025/events
|
https://github.com/huggingface/transformers/issues/23025
| 1,686,602,572 |
I_kwDOCUB6oc5kh4NM
| 23,025 |
MegaModel not usable with chunking if input_embeds are used instead of input_ids
|
{
"login": "jganitzer",
"id": 88656546,
"node_id": "MDQ6VXNlcjg4NjU2NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/88656546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jganitzer",
"html_url": "https://github.com/jganitzer",
"followers_url": "https://api.github.com/users/jganitzer/followers",
"following_url": "https://api.github.com/users/jganitzer/following{/other_user}",
"gists_url": "https://api.github.com/users/jganitzer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jganitzer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jganitzer/subscriptions",
"organizations_url": "https://api.github.com/users/jganitzer/orgs",
"repos_url": "https://api.github.com/users/jganitzer/repos",
"events_url": "https://api.github.com/users/jganitzer/events{/privacy}",
"received_events_url": "https://api.github.com/users/jganitzer/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Good catch, the error is pretty straightforward, we should check with either size. "
] | 1,682 | 1,682 | 1,682 |
NONE
| null |
### System Info
When `inputs_embeds` is used instead of `input_ids`, `input_ids` is `None`.
Therefore this error happens in `modeling_mega.py`:
-> 1544 if self.config.use_chunking and (input_ids.size(1) > self.config.chunk_size):
1545 print(input_ids.size(1))
1546 if input_ids.size(1) % self.config.chunk_size != 0:
AttributeError: 'NoneType' object has no attribute 'size'
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
import torch.nn as nn

from transformers import MegaModel, MegaConfig

# Assumption to make the snippet self-contained (defined elsewhere in the original script):
device = "cuda" if torch.cuda.is_available() else "cpu"


class MegaRegressor(nn.Module):
    def __init__(self, input_dim=4, hidden_dim=4, num_layers=2, num_heads=4):
        super().__init__()
        config = MegaConfig(
            vocab_size=4,
            hidden_size=hidden_dim,
            num_attention_heads=num_heads,
            intermediate_size=4 * hidden_dim,
            max_positions=50000,
            num_hidden_layers=num_layers,
            output_attentions=False,
            return_dict=True,
            use_chunking=True,
            chunk_size=100,
        )
        self.encoder = MegaModel(config)
        self.fc = nn.Linear(config.hidden_size, 1)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)

    def forward(self, inputs_embeds):
        out = self.encoder(inputs_embeds=inputs_embeds)  # (batch_size, seq_length, hidden_size)
        output = out["last_hidden_state"]
        output = torch.mean(output, dim=1)
        output = self.dropout(output)
        logits = self.fc(output).squeeze()  # (batch_size, 1)
        return logits


model = MegaRegressor().to(device)
data = torch.randn(8, 49800, 4, device=device)  # assumption: data.shape was torch.Size([8, 49800, 4])
output = model(data)
```
### Expected behavior
The model should not raise `AttributeError: 'NoneType' object has no attribute 'size'`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23025/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23024
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23024/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23024/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23024/events
|
https://github.com/huggingface/transformers/pull/23024
| 1,686,525,366 |
PR_kwDOCUB6oc5PR5UC
| 23,024 |
🚨🚨🚨 [`Blip`] remove labels masking
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
Addresses https://github.com/huggingface/transformers/pull/23004#issuecomment-1523776082
This PR aims to harmonize the training procedure with most of the recent additions in `transformers`. It should be the user's responsibility to fill the padding tokens of the labels with the correct ignore value. This PR addresses the issue that was raised for other architectures such as Luke or Pix2Struct.
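For context, the user-side preprocessing this implies is the standard ignore-index masking; a minimal sketch (the checkpoint name is only an example):
```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
labels = processor.tokenizer(
    ["a photo of a cat"], padding="max_length", max_length=16, return_tensors="pt"
).input_ids
# Replace pad token ids with -100 so the cross-entropy loss ignores them.
labels[labels == processor.tokenizer.pad_token_id] = -100
```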
However, I realize that even if this patch is applied, we still mask-fill the labels with -100 [here](https://github.com/huggingface/transformers/blob/9435cc6670b7b8656b33e8ff28d3bbe9bafbca9d/src/transformers/models/blip/modeling_blip.py#L1133), similarly to [T5](https://github.com/huggingface/transformers/blob/9435cc6670b7b8656b33e8ff28d3bbe9bafbca9d/src/transformers/models/t5/modeling_t5.py#L868).
I would love your feedback on this, @amyeroberts !
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23024/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23024",
"html_url": "https://github.com/huggingface/transformers/pull/23024",
"diff_url": "https://github.com/huggingface/transformers/pull/23024.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23024.patch",
"merged_at": 1682699091000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23023
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23023/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23023/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23023/events
|
https://github.com/huggingface/transformers/pull/23023
| 1,686,451,944 |
PR_kwDOCUB6oc5PRpoS
| 23,023 |
[`Pix2Struct`] Fix pix2struct doctest
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes Pix2Struct doctest
Link to failing job: https://github.com/huggingface/transformers/actions/runs/4815336726/jobs/8573921590
With https://github.com/huggingface/transformers/pull/23004 merged, the label smoothing of the loss function has been removed. Therefore the expected value of the loss changed, leading to the failing doctest.
cc @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23023/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23023",
"html_url": "https://github.com/huggingface/transformers/pull/23023",
"diff_url": "https://github.com/huggingface/transformers/pull/23023.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23023.patch",
"merged_at": 1682588883000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23022
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23022/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23022/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23022/events
|
https://github.com/huggingface/transformers/pull/23022
| 1,686,421,542 |
PR_kwDOCUB6oc5PRjHx
| 23,022 |
Fix the expected error in `test_offline_mode_pipeline_exception`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This is extremely weird. The error is issued [here](https://github.com/huggingface/transformers/blob/9435cc6670b7b8656b33e8ff28d3bbe9bafbca9d/src/transformers/pipelines/__init__.py#L430) and is only on one line.",
"Yeah, but this is the log (see the last line)\r\n\r\n-----------------------------\r\n\r\n2023-04-26T19:08:23.5081961Z E AssertionError: 'You cannot infer task automatically within `pipeline` when using offline mode' not found in '2023-04-26 19:07:56.056711: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\\n2023-04-26 19:07:57.005479: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library \\'libnvinfer.so.7\\'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64\\n2023-04-26 19:07:57.005598: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library \\'libnvinfer_plugin.so.7\\'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64\\n2023-04-26 19:07:57.005612: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\\n╭───────────────────── Traceback (most recent call last) ──────────────────────╮\\n│ <string>:11 in <module> │\\n│ /transformers/src/transformers/pipelines/__init__.py:726 in pipeline │\\n│ │\\n│ 723 │ │ │ │ \"Inferring the task automatically requires to check th │\\n│ 724 │ │ │ │ f\"{model} is not a valid model_id.\" │\\n│ 725 │ │ │ ) │\\n│ ❱ 726 │ │ task = get_task(model, use_auth_token) │\\n│ 727 │ │\\n│ 728 │ # Retrieve the task │\\n│ 729 │ if task in custom_tasks: │\\n│ │\\n│ /transformers/src/transformers/pipelines/__init__.py:430 in get_task │\\n│ │\\n│ 427 │\\n│ 428 def get_task(model: str, use_auth_token: Optional[str] = None) -> str: │\\n│ 429 │ if is_offline_mode(): │\\n│ ❱ 430 │ │ raise RuntimeError(\"You cannot infer task automatically within │\\n│ 431 │ try: │\\n│ 432 │ │ info = model_info(model, token=use_auth_token) │\\n│ 433 │ except Exception as e: │\\n╰──────────────────────────────────────────────────────────────────────────────╯\\nRuntimeError: You cannot infer task automatically within `pipeline` when using \\noffline mode\\n'\r\n"
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
The expected error becomes `RuntimeError: You cannot infer task automatically within `pipeline` when using \noffline mode\n` since April 18, i.e. with 2 extra `\n`. It's not very clear where this change comes from; it is probably just a formatting artifact of the `subprocess` output.
Strangely, if I run the command in that test like below, there is no extra newline.
```bash
HF_HOME=/mnt/cache TRANSFORMERS_IS_CI=yes TRANSFORMERS_OFFLINE=1 python3 temp.py
```
with temp.py
```python
from transformers import pipeline
mname = "hf-internal-testing/tiny-random-bert"
pipe = pipeline(model=mname)
```
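One way to make the test robust to this kind of drift is to normalize whitespace before asserting; a minimal sketch (hedged: `assert_error_in_output` is an illustrative helper, not the actual change in this PR):
```python
def assert_error_in_output(expected: str, stderr: bytes) -> None:
    # Collapse all whitespace (including the stray newlines) before comparing,
    # so re-wrapped tracebacks still match the expected message.
    normalized = " ".join(stderr.decode().split())
    assert " ".join(expected.split()) in normalized, f"{expected!r} not found in output"
```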
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23022/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23022",
"html_url": "https://github.com/huggingface/transformers/pull/23022",
"diff_url": "https://github.com/huggingface/transformers/pull/23022.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23022.patch",
"merged_at": 1682598125000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23021
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23021/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23021/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23021/events
|
https://github.com/huggingface/transformers/pull/23021
| 1,686,406,584 |
PR_kwDOCUB6oc5PRf-x
| 23,021 |
Adding XLA support for greedy sampling
|
{
"login": "aashiqmuhamed",
"id": 17514579,
"node_id": "MDQ6VXNlcjE3NTE0NTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/17514579?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aashiqmuhamed",
"html_url": "https://github.com/aashiqmuhamed",
"followers_url": "https://api.github.com/users/aashiqmuhamed/followers",
"following_url": "https://api.github.com/users/aashiqmuhamed/following{/other_user}",
"gists_url": "https://api.github.com/users/aashiqmuhamed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aashiqmuhamed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aashiqmuhamed/subscriptions",
"organizations_url": "https://api.github.com/users/aashiqmuhamed/orgs",
"repos_url": "https://api.github.com/users/aashiqmuhamed/repos",
"events_url": "https://api.github.com/users/aashiqmuhamed/events{/privacy}",
"received_events_url": "https://api.github.com/users/aashiqmuhamed/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23021). All of your documentation changes will be reflected on that endpoint.",
"Hey @aashiqmuhamed! In general the PR looks positive, but before diving deeper, let us (`transformers` team) have a discussion about adding this type of PRs (new HW-oriented optimizations). `generate` is very complex ATM, and we want to make it more manageable -- if we merge all PRs of this kind in the current `generate` state, it will become maintenance hell for everyone. \r\n\r\nI will get back to you in this PR within a week.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
CONTRIBUTOR
| null |
# What does this PR do?
This CR enables greedy sampling in `model.generate` on XLA devices such as Trainium and TPU. It addresses issues such as https://github.com/huggingface/transformers/issues/18661 and https://github.com/huggingface/transformers/issues/12322.
The implementation is inspired by the corresponding TensorFlow generate function in transformers. The CR uses conditional statements to support greedy sampling, and the user can switch between the GPU implementation and the XLA implementation depending on the state of `is_torch_tpu_available`. The CR also implements kv-cache functionality that is XLA-compatible.
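This is not the PR's actual code, but a minimal sketch of the dispatch pattern described above (assuming `torch_xla` is installed on the XLA device):
```python
import torch
from transformers.utils import is_torch_tpu_available

def greedy_next_token(logits: torch.Tensor) -> torch.Tensor:
    # Pure tensor ops keep the traced graph static; avoid .item() and data-dependent shapes
    next_tokens = torch.argmax(logits[:, -1, :], dim=-1)
    if is_torch_tpu_available():
        import torch_xla.core.xla_model as xm
        xm.mark_step()  # materialize the lazy XLA graph once per decoding step
    return next_tokens
```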
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante Feel free to suggest appropriate tests/refactors for this PR. We have tested generation locally using a trn1.32xlarge instance and matched Rouge scores for T5-small summarization.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23021/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23021",
"html_url": "https://github.com/huggingface/transformers/pull/23021",
"diff_url": "https://github.com/huggingface/transformers/pull/23021.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23021.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23020
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23020/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23020/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23020/events
|
https://github.com/huggingface/transformers/pull/23020
| 1,686,378,319 |
PR_kwDOCUB6oc5PRZ4h
| 23,020 |
added type hints for blip_text model
|
{
"login": "iamarunbrahma",
"id": 6504730,
"node_id": "MDQ6VXNlcjY1MDQ3MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6504730?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iamarunbrahma",
"html_url": "https://github.com/iamarunbrahma",
"followers_url": "https://api.github.com/users/iamarunbrahma/followers",
"following_url": "https://api.github.com/users/iamarunbrahma/following{/other_user}",
"gists_url": "https://api.github.com/users/iamarunbrahma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iamarunbrahma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iamarunbrahma/subscriptions",
"organizations_url": "https://api.github.com/users/iamarunbrahma/orgs",
"repos_url": "https://api.github.com/users/iamarunbrahma/repos",
"events_url": "https://api.github.com/users/iamarunbrahma/events{/privacy}",
"received_events_url": "https://api.github.com/users/iamarunbrahma/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Error logs suggest that to use ```pip install \"black[jupyter]\"``` but unable to understand what to do after that. @Rocketknight1 could you suggest how to fix test failures?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23020). All of your documentation changes will be reflected on that endpoint.",
"Hi @iamarunbrahma, `black` is a code formatting tool. After you've installed it, run `make style` or `make fixup` in the `transformers` directory. This should reformat your file for you and get the tests in the CI to pass.",
"@Rocketknight1 getting this error while running ```make fixup```:\r\n```\r\n/bin/sh: line 3: black: command not found\r\n/bin/sh: line 4: ruff: command not found\r\nmake: *** [Makefile:10: modified_only_fixup] Error 127\r\n```\r\n and while running ```make style```:\r\n```\r\nmake: black: No such file or directory\r\nmake: *** [Makefile:68: style] Error 127\r\n```\r\nI have installed both ```black``` and ```ruff```",
"@iamarunbrahma looks like `black` wasn't installed after all! The easiest way to get it is to `cd` to the `transformers` source directory you're working on and `pip install .[quality]`. You can also just type `pip install transformers[quality]` anywhere, but this may get slightly older versions of the code quality tools (it's usually fine though).\r\n\r\nOnce the tools you need are installed, `make fixup` or `make style` should work."
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
Added type hints for the ```blip_text``` PyTorch model, as requested in #16059
@Rocketknight1 Could you review this?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23020/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23020",
"html_url": "https://github.com/huggingface/transformers/pull/23020",
"diff_url": "https://github.com/huggingface/transformers/pull/23020.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23020.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23019
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23019/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23019/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23019/events
|
https://github.com/huggingface/transformers/issues/23019
| 1,686,366,821 |
I_kwDOCUB6oc5kg-pl
| 23,019 |
Error in getting embedding_size.
|
{
"login": "enze5088",
"id": 14285786,
"node_id": "MDQ6VXNlcjE0Mjg1Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14285786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enze5088",
"html_url": "https://github.com/enze5088",
"followers_url": "https://api.github.com/users/enze5088/followers",
"following_url": "https://api.github.com/users/enze5088/following{/other_user}",
"gists_url": "https://api.github.com/users/enze5088/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enze5088/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enze5088/subscriptions",
"organizations_url": "https://api.github.com/users/enze5088/orgs",
"repos_url": "https://api.github.com/users/enze5088/repos",
"events_url": "https://api.github.com/users/enze5088/events{/privacy}",
"received_events_url": "https://api.github.com/users/enze5088/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @stas00 ",
"With zero-3 outside of fwd/bwd logic where this is done automatically you need to manually gather the sharded model's weights that you need. \r\n\r\nPlease see:\r\nhttps://huggingface.co/docs/transformers/main/main_classes/deepspeed#gathering-parameters\r\n\r\nAnd you will find several examples in our code, e.g.:\r\n\r\nhttps://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/src/transformers/modeling_utils.py#L1455-L1456"
] | 1,682 | 1,683 | 1,683 |
CONTRIBUTOR
| null |
### System Info
- transformers version: 4.28.1
- deepspeed 0.8.3
In [examples](https://github.com/huggingface/transformers/tree/main/examples)/[pytorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch)/[language-modeling](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling)/[run_clm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py)
When I set a DeepSpeed config, I get an embedding_size of 0.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
embedding_size = model.get_input_embeddings().weight.shape[0]
if len(tokenizer) > embedding_size:
model.resize_token_embeddings(len(tokenizer))
```
At this point, the embedding_size obtained is 0, which can cause errors later in the script. This is likely because of the DeepSpeed ZeRO-3 config below, which shards the model parameters:
```
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto",
"total_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": "auto"
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
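For reference, a minimal sketch of the gathering pattern suggested in the comments (assuming `model` has already been initialized under ZeRO stage 3, as in the repro above):
```python
import deepspeed

# Under ZeRO stage 3 the weights are sharded across ranks, so their local
# shape is 0; gather them before reading the real shape.
embeddings = model.get_input_embeddings()
with deepspeed.zero.GatheredParameters(embeddings.weight, modifier_rank=None):
    embedding_size = embeddings.weight.shape[0]
```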
### Expected behavior
The embedding size obtained should not be zero.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23019/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23018
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23018/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23018/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23018/events
|
https://github.com/huggingface/transformers/issues/23018
| 1,686,193,365 |
I_kwDOCUB6oc5kgUTV
| 23,018 |
Parameter at index 195 has been marked as ready twice.
|
{
"login": "skye95git",
"id": 41561936,
"node_id": "MDQ6VXNlcjQxNTYxOTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/41561936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skye95git",
"html_url": "https://github.com/skye95git",
"followers_url": "https://api.github.com/users/skye95git/followers",
"following_url": "https://api.github.com/users/skye95git/following{/other_user}",
"gists_url": "https://api.github.com/users/skye95git/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skye95git/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skye95git/subscriptions",
"organizations_url": "https://api.github.com/users/skye95git/orgs",
"repos_url": "https://api.github.com/users/skye95git/repos",
"events_url": "https://api.github.com/users/skye95git/events{/privacy}",
"received_events_url": "https://api.github.com/users/skye95git/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"There is little we can do to help without seeing a full reproducer.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Got exact same bug when gradient_checkpointing_enable()\r\n ",
"Are you using DDP?\r\n\r\nI am using DDP on two GPUs:\r\n\r\npython -m torch.distributed.run --nproc_per_node 2 run_audio_classification.py\r\n\r\n(run because launch fails)\r\n\r\nAll the rest being equal facebook/wav2vec2-base works if gradient_checkpointing is set to True, however, the large model crashes unless the option it is either set to False or removed.\r\n\r\ngradient_checkpointing works for both models if using a single GPU, so the issue seems to be DDP-related.\r\n\r\nThis seems to come from:\r\n\r\nhttps://github.com/pytorch/pytorch/blob/main/torch/csrc/distributed/c10d/reducer.cpp\r\n\r\n",
"The problem may be that when the trainer is invoked from torchrun is setting find_unused_parameters to True for all devices, when, apparently, it should only do it for the first one:\r\n\r\nhttps://discuss.pytorch.org/t/finding-the-cause-of-runtimeerror-expected-to-mark-a-variable-ready-only-once/124428/3\r\n\r\nAnd the reason why the base model works is because that option can be set to False. However, for the large model it has to be True.\r\n\r\nThe solution would be changing the way in which that argument is parsed.\r\n\r\n",
"Thank you @mirix , Making `ddp_find_unused_parameters=False` in Trainer solved this issue for me.",
"if you use `enable_gradient_checkpointing()` you can now overcome this issue by passing `gradient_checkpointing_kwargs={\"use_reentrant\": False}`\r\n\r\n```python\r\nmodel.enable_gradient_checkpointing(gradient_checkpointing_kwargs={\"use_reentrant\": False})\r\n```"
] | 1,682 | 1,699 | 1,685 |
NONE
| null |
### System Info
- `transformers` version: 4.28.0
- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31
- Python version: 3.9.12
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I retrained RoBERTa on my own corpus with the MLM task, and set `model.gradient_checkpointing_enable()` to save memory.
```python
model = RobertaModel.from_pretrained(model_name_or_path,config=config)
model.gradient_checkpointing_enable() # Activate gradient checkpointing
model = Model(model,config,tokenizer,args)
```
My model:
```python
class Model(nn.Module):
def __init__(self, model,config,tokenizer,args):
super(Model, self).__init__()
self.encoder = model
self.config = config
self.tokenizer = tokenizer
self.args = args
self.lm_head = nn.Linear(config.hidden_size,config.vocab_size)
self.lm_head.weight = self.encoder.embeddings.word_embeddings.weight
self.register_buffer(
"bias", torch.tril(torch.ones((args.block_size, args.block_size), dtype=torch.uint8)).view(1, args.block_size, args.block_size)
)
def forward(self, mlm_ids):
...
```
There is an error:
```
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parame
ter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. o
r try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multipl
e reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result
in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple
times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does n
ot change over iterations.
Parameter at index 195 with name encoder.encoder.layer.11.output.LayerNorm.weight has been marked as ready twice. This means that multiple
autograd engine hooks have fired for this particular parameter during this iteration.
```
If I remove the line `model.gradient_checkpointing_enable()`, everything works. Why?
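Based on the workarounds later reported in the comments, a hedged sketch (the model name is a stand-in, and the second option requires a recent `transformers` release):
```python
from transformers import RobertaModel, TrainingArguments

model = RobertaModel.from_pretrained("roberta-base")  # stand-in for model_name_or_path

# Workaround 1: stop DDP from tracking unused parameters
training_args = TrainingArguments(
    output_dir="out",
    ddp_find_unused_parameters=False,
)

# Workaround 2 (newer transformers releases): non-reentrant checkpointing,
# called on the pretrained encoder as in the snippet above
model.gradient_checkpointing_enable(gradient_checkpointing_kwargs={"use_reentrant": False})
```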
### Expected behavior
I want to pre-train with `gradient_checkpointing`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23018/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23017
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23017/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23017/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23017/events
|
https://github.com/huggingface/transformers/issues/23017
| 1,686,150,972 |
I_kwDOCUB6oc5kgJ88
| 23,017 |
model.generate with different batch sizes gives different results
|
{
"login": "Alwin4Zhang",
"id": 33918902,
"node_id": "MDQ6VXNlcjMzOTE4OTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33918902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Alwin4Zhang",
"html_url": "https://github.com/Alwin4Zhang",
"followers_url": "https://api.github.com/users/Alwin4Zhang/followers",
"following_url": "https://api.github.com/users/Alwin4Zhang/following{/other_user}",
"gists_url": "https://api.github.com/users/Alwin4Zhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Alwin4Zhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Alwin4Zhang/subscriptions",
"organizations_url": "https://api.github.com/users/Alwin4Zhang/orgs",
"repos_url": "https://api.github.com/users/Alwin4Zhang/repos",
"events_url": "https://api.github.com/users/Alwin4Zhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/Alwin4Zhang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @gante",
"> cc @gante\r\n\r\nthks to reply.I find some things about this stranger phenomenon.When doing generation,the second time it started to be different.The pic below is the beam_new_tokens results when using `batch size = 1` and `batch_size = 2`.\r\n\r\n<img width=\"1975\" alt=\"image\" src=\"https://user-images.githubusercontent.com/33918902/235030665-5fde95a5-f22c-436b-afcc-9e9953d95b3e.png\">\r\n\r\n\r\ndebug line by line,I find that something about the torch.multinomial,after this operator,results begin to be different",
"Hey @Alwin4Zhang \r\n\r\nWhen you use `do_sample=True`, the results will be different every time you call `generate` :) Read [our blog post on text generation](https://huggingface.co/blog/how-to-generate) and [our docs](https://huggingface.co/docs/transformers/generation_strategies) for further information.",
"> Hey @Alwin4Zhang\r\n> \r\n> When you use `do_sample=True`, the results will be different every time you call `generate` :) Read [our blog post on text generation](https://huggingface.co/blog/how-to-generate) and [our docs](https://huggingface.co/docs/transformers/generation_strategies) for further information.\r\n\r\nThks 4 reply.It's weird that it's the same when I only use `do_sample=True` + `top_k` + `top_p` with different batch_size,I'm sure that manual_seed is the same.But add `beam_search` arg,the strange things above will happen.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@gante Facing the same issue even when `do_sample = False`\r\n\r\n[Colab Notebook replicating this issue](https://colab.research.google.com/drive/1et5wYV25Bv8miAx9T8ijJ4trpTV2QPGh?usp=sharing)",
"Hey @varadhbhatnagar 👋 \r\n\r\nTo be more specific, batching is not entirely innocuous on the output. The final result depends on the order of operations with FP16 (and other precisions), and batching changes the order of operations, meaning that the model outputs will see tiny fluctuations. There is nothing we can do to remedy this effect other than increasing the precision (e.g. to FP32).\r\n\r\nThese fluctuations often cause no change in the model output with `do_sample = False` -- unless the two most likely tokens have very similar probabilities. This may happen with some frequency when you're using a model with out of distributions inputs, such as using a code model with a non-code input (as seen in your colab) :)"
] | 1,682 | 1,690 | 1,685 |
NONE
| null |
### System Info
I'm using `MT5ForConditionalGeneration` from transformers to generate summaries, but with the arguments below I get different results depending on the batch size when using beam search + do_sample + top_k + top_p. Using only beam search or only do_sample does not cause this phenomenon. Why?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
self.model = MT5ForConditionalGeneration.from_pretrained(model_dir)
results = conditional_generation_summarizer.batch_summarize(
documents=temp,
# max_length=MAX_LENGTH,
# bad_words=bad_words,
max_new_tokens=MAX_LENGTH,
num_beams=6,
# num_beam_groups=3,
# temperature=4.0,
# diversity_penalty=2.0,
# max_new_tokens=30,
no_repeat_ngram_size=2,
do_sample=True,
top_k=20,
# top_k=1,
top_p=0.9,
repetition_penalty=4.0,
length_penalty=20.0,
early_stopping=True,
# num_return_sequences=6
)
result = self.model.generate(
input_ids,
# attention_mask=attention_mask,
decoder_start_token_id=self.tokenizer.cls_token_id,
eos_token_id=self.tokenizer.sep_token_id,
# max_length=max_length,
# early_stopping=True,
# num_beams=num_beams,
**kwargs
)
```
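For debugging, a minimal sketch of a seeded comparison between batch sizes (`model` and `input_ids` stand for the ones in the snippet above; `max_new_tokens` is illustrative):
```python
from transformers import set_seed

gen_kwargs = dict(num_beams=6, do_sample=True, top_k=20, top_p=0.9, max_new_tokens=64)

set_seed(42)
single = model.generate(input_ids[:1], **gen_kwargs)

set_seed(42)
batched = model.generate(input_ids, **gen_kwargs)

# Even with identical seeds, the first row of `batched` can differ from
# `single`: sampling consumes one RNG stream for the whole batch, and
# reduced-precision reductions change with batch size.
```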
<img width="526" alt="Pasted Graphic 20" src="https://user-images.githubusercontent.com/33918902/234769066-ee29b84f-90ca-46a4-942c-fef7adcdf1ba.png">
<img width="501" alt="Pasted Graphic 21" src="https://user-images.githubusercontent.com/33918902/234769088-348c19e6-d5db-42b9-b7ee-4a4c1b3690a6.png">
### Expected behavior
I think a different batch_size should not affect the generation results.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23017/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23016
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23016/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23016/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23016/events
|
https://github.com/huggingface/transformers/issues/23016
| 1,686,131,551 |
I_kwDOCUB6oc5kgFNf
| 23,016 |
When downloading the model. Error: DefaultCPUAllocator: can't allocate memory: you tried to allocate
|
{
"login": "af913337456",
"id": 10436277,
"node_id": "MDQ6VXNlcjEwNDM2Mjc3",
"avatar_url": "https://avatars.githubusercontent.com/u/10436277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/af913337456",
"html_url": "https://github.com/af913337456",
"followers_url": "https://api.github.com/users/af913337456/followers",
"following_url": "https://api.github.com/users/af913337456/following{/other_user}",
"gists_url": "https://api.github.com/users/af913337456/gists{/gist_id}",
"starred_url": "https://api.github.com/users/af913337456/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/af913337456/subscriptions",
"organizations_url": "https://api.github.com/users/af913337456/orgs",
"repos_url": "https://api.github.com/users/af913337456/repos",
"events_url": "https://api.github.com/users/af913337456/events{/privacy}",
"received_events_url": "https://api.github.com/users/af913337456/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"what's wrong with the code above.",
"This means you do not have enough RAM to load the model, not disk memory. You can try adding `device_map=\"auto\"` to load directly the model on the GPUs if you have any, or `torch_dtype=torch.float16` to save 2x the memory (inference only)."
] | 1,682 | 1,682 | 1,682 |
NONE
| null |
### System Info
I want to download the GPT4all-J model.
according to this link: https://huggingface.co/nomic-ai/gpt4all-j
download code:
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("nomic-ai/gpt4all-j", revision="v1.2-jazzy")
```
but it throws this error in the end:
```
RuntimeError: [enforce fail at alloc_cpu.cpp:75] err == 0. DefaultCPUAllocator: can't allocate memory: you
tried to allocate 268435456 bytes. Error code 12 (Cannot allocate memory)
```
----
My machine's memory should be more than enough for 268435456 bytes:
```
[root@VM-0-5-centos ~]# free
total used free shared buff/cache available
Mem: 19750992 386456 18700296 2024 664240 19070948
Swap: 0 0 0
```
> 19070948 KB > 268435456 bytes
---
@vanpelt @pvl @arfon @xeb
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. create the download python file;
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("nomic-ai/gpt4all-j", revision="v1.2-jazzy")
```
2. run step 1;
3. wait for it;
4. got the error.
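Per the suggestion in the comments, a hedged variant of the loading code (`device_map="auto"` requires `accelerate` to be installed):
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "nomic-ai/gpt4all-j",
    revision="v1.2-jazzy",
    device_map="auto",          # place weights on available GPUs/CPU automatically
    torch_dtype=torch.float16,  # halves RAM usage; inference only
)
```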
### Expected behavior
download success.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23016/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23015
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23015/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23015/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23015/events
|
https://github.com/huggingface/transformers/issues/23015
| 1,685,580,330 |
I_kwDOCUB6oc5kd-oq
| 23,015 |
Saving predictions for --do_predict and --predict_with_generate in transformers/examples/pytorch/question-answering/run_seq2seq_qa.py
|
{
"login": "venkatesh-das",
"id": 47103482,
"node_id": "MDQ6VXNlcjQ3MTAzNDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/47103482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/venkatesh-das",
"html_url": "https://github.com/venkatesh-das",
"followers_url": "https://api.github.com/users/venkatesh-das/followers",
"following_url": "https://api.github.com/users/venkatesh-das/following{/other_user}",
"gists_url": "https://api.github.com/users/venkatesh-das/gists{/gist_id}",
"starred_url": "https://api.github.com/users/venkatesh-das/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/venkatesh-das/subscriptions",
"organizations_url": "https://api.github.com/users/venkatesh-das/orgs",
"repos_url": "https://api.github.com/users/venkatesh-das/repos",
"events_url": "https://api.github.com/users/venkatesh-das/events{/privacy}",
"received_events_url": "https://api.github.com/users/venkatesh-das/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Do you want to open a PR with those changes?",
"Yeah Sure I can do that.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
NONE
| null |
### Feature request
The feature of saving predictions for `--do_predict` and `--predict_with_generate` is not implemented in the `run_seq2seq_qa.py` module.
Missing code file -> `transformers/examples/pytorch/question-answering/trainer_seq2seq_qa.py`
Image of the section of code which should handle this.

### Motivation
Some other modules, like `run_summarization.py`, already have this feature.
Motivation from `transformers/examples/pytorch/summarization/run_summarization.py`:

### Your contribution
Adding a code snippet like the one below would save the predictions for `--do_predict` and `--predict_with_generate`.
Code changes to be done here -> `transformers/examples/pytorch/question-answering/trainer_seq2seq_qa.py`
```python
# Prediction
if training_args.do_predict:
logger.info("*** Predict ***")
results = trainer.predict(predict_dataset, predict_examples)
metrics = results.metrics
max_predict_samples = (
data_args.max_predict_samples if data_args.max_predict_samples is not None else len(predict_dataset)
)
metrics["predict_samples"] = min(max_predict_samples, len(predict_dataset))
trainer.log_metrics("predict", metrics)
trainer.save_metrics("predict", metrics)
# Added code section for saving predictions
if trainer.is_world_process_zero():
if training_args.predict_with_generate:
predictions = results.predictions  # use the `results` returned by trainer.predict above
predictions = [pred['prediction_text'] for pred in predictions]
output_prediction_file = os.path.join(training_args.output_dir, "generated_predictions.txt")
with open(output_prediction_file, "w") as writer:
writer.write("\n".join(predictions))
```

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23015/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23014
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23014/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23014/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23014/events
|
https://github.com/huggingface/transformers/pull/23014
| 1,685,460,569 |
PR_kwDOCUB6oc5POTcR
| 23,014 |
🚨🚨🚨 Use default ignore index in Luke
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
As discussed in #22981, the `ignore_index` for Luke should be the same as for all other models in Transformers, even if it does not match the original authors' implementation.
This is a breaking change, but it is needed to align all models on the same API.
Fixes #22981
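For illustration, the convention being aligned to is PyTorch's default loss-masking value (a sketch, not code from the PR):
```python
import torch
import torch.nn as nn

loss_fct = nn.CrossEntropyLoss()  # default ignore_index is -100, the Transformers-wide convention
logits = torch.randn(2, 5)
labels = torch.tensor([3, -100])  # the -100 position contributes nothing to the loss
loss = loss_fct(logits, labels)
```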
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23014/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23014/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23014",
"html_url": "https://github.com/huggingface/transformers/pull/23014",
"diff_url": "https://github.com/huggingface/transformers/pull/23014.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23014.patch",
"merged_at": 1682546101000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23013
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23013/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23013/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23013/events
|
https://github.com/huggingface/transformers/pull/23013
| 1,685,335,884 |
PR_kwDOCUB6oc5PN4wo
| 23,013 |
Upgrading sentencepiece modeling file (for proto > 4 support).
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Can you push an empty commit with \"[all-test]\" in the message? I'd like to see if there is a problem with an older version of protobuf (CI should be on < 4).",
"I remember we said we should check if the protobuf version is installed somewhere, as we had some issues. Did it turn out to be ok?",
"It turns out the file in https://github.com/google/sentencepiece is actually not even valid protobuf 4.x ...",
"So this doesn't work with protobuf 4.x in the end?",
"Nope. it passes with protobuf 3.20 in the tests, but not with 4.x.....\r\n\r\nI think we should wait for them to upgrade before doing it ourselves (the .proto files are available so we could generate them with a 4.x compiler... but I don't like doing that.) As much as handling 4.x and 3.20 codebases is annoying I don't want to spend much time on this tbh.\r\n\r\nI'll close this PR, we can resurrrect later maybe.",
"Thanks for having tried!"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
Upgrades the sentencepiece modeling file, taken directly from `google/sentencepiece`.
Should prevent the "Downgrade the protobuf package" error.
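For context in the version discussion, a quick way to check the installed protobuf runtime (not part of this PR):
```python
import google.protobuf

# sentencepiece's generated *_pb2.py module must match the runtime's major version
print(google.protobuf.__version__)
```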
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23013/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23013",
"html_url": "https://github.com/huggingface/transformers/pull/23013",
"diff_url": "https://github.com/huggingface/transformers/pull/23013.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23013.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23012
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23012/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23012/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23012/events
|
https://github.com/huggingface/transformers/pull/23012
| 1,685,264,313 |
PR_kwDOCUB6oc5PNpis
| 23,012 |
🌐 [i18n-KO] Translated `tasks/question_answering.mdx` to Korean
|
{
"login": "jungnerd",
"id": 46880056,
"node_id": "MDQ6VXNlcjQ2ODgwMDU2",
"avatar_url": "https://avatars.githubusercontent.com/u/46880056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jungnerd",
"html_url": "https://github.com/jungnerd",
"followers_url": "https://api.github.com/users/jungnerd/followers",
"following_url": "https://api.github.com/users/jungnerd/following{/other_user}",
"gists_url": "https://api.github.com/users/jungnerd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jungnerd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jungnerd/subscriptions",
"organizations_url": "https://api.github.com/users/jungnerd/orgs",
"repos_url": "https://api.github.com/users/jungnerd/repos",
"events_url": "https://api.github.com/users/jungnerd/events{/privacy}",
"received_events_url": "https://api.github.com/users/jungnerd/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"May you please review this PR? 😄\r\n@sgugger, @ArthurZucker, @eunseojo"
] | 1,682 | 1,683 | 1,682 |
CONTRIBUTOR
| null |
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" -->
# What does this PR do?
Translated the `tasks/question_answering.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- This leaves a record on the main issue! When practicing with the PseudoLab repo, please remove this. :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. Please reveal the comment below asking PseudoLab team members for a review only after all the checks above are complete! -->
Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Please reveal the comment below asking Hugging Face staff for a review only after the review with PseudoLab team members is finished! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23012/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23012/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23012",
"html_url": "https://github.com/huggingface/transformers/pull/23012",
"diff_url": "https://github.com/huggingface/transformers/pull/23012.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23012.patch",
"merged_at": 1682953540000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23011
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23011/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23011/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23011/events
|
https://github.com/huggingface/transformers/pull/23011
| 1,685,215,795 |
PR_kwDOCUB6oc5PNfLD
| 23,011 |
Remove a failing ONNX test
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
Same as in #22660, but for `swin` after the recent PR #22893
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23011/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23011",
"html_url": "https://github.com/huggingface/transformers/pull/23011",
"diff_url": "https://github.com/huggingface/transformers/pull/23011.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23011.patch",
"merged_at": 1682523853000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23010
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23010/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23010/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23010/events
|
https://github.com/huggingface/transformers/pull/23010
| 1,685,207,965 |
PR_kwDOCUB6oc5PNde6
| 23,010 |
Add Trainer support for ReduceLROnPlateau
|
{
"login": "pie3636",
"id": 9118775,
"node_id": "MDQ6VXNlcjkxMTg3NzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9118775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pie3636",
"html_url": "https://github.com/pie3636",
"followers_url": "https://api.github.com/users/pie3636/followers",
"following_url": "https://api.github.com/users/pie3636/following{/other_user}",
"gists_url": "https://api.github.com/users/pie3636/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pie3636/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pie3636/subscriptions",
"organizations_url": "https://api.github.com/users/pie3636/orgs",
"repos_url": "https://api.github.com/users/pie3636/repos",
"events_url": "https://api.github.com/users/pie3636/events{/privacy}",
"received_events_url": "https://api.github.com/users/pie3636/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the review! I believe this should do it. There isn't much in the way of default arguments, but `ReduceLROnPlateau` is quite different from other schedulers in the first place.",
"Hi @pie3636 and @sgugger , \r\nThanks for this PR !\r\nI would like to use the ReduceLROnPlateau scheduler but I don't understand which parameters (patience; factor; cooldown) it has by default and where I can change them.\r\nIf I'm right it uses the default ones :\r\n`lr_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.2, patience=5, cooldown=2)`\r\nand if I want to test new ones I have to build my own scheduler and pass it to hf?\r\nThanks a lot for this new feature !",
"@lombardata The values you are referring to (`factor=0.2, patience=5, cooldown=2`) are example values used in the unit test.\r\nThe actual default parameters are [the ones provided by pytorch](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ReduceLROnPlateau.html), which are, as I'm writing these lines, `optimizer, mode='min', factor=0.1, patience=10, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08, verbose=False`. \r\n\r\nIf you want to use different values, you indeed need to build your own scheduler and optimizer, and pass them as a tuple to the `Trainer` class using the `optimizers` argument.",
"Ok, thank you very much @pie3636 for the clarification.\r\nIf anyone read this post and is interested in how to pass a custom reducelronplateau scheduler to the Trainer, here is a simple way to do it :\r\n`optimizer = torch.optim.Adam(model.parameters(), lr = 0.01, weight_decay = 0.0001)\r\n`\r\n`lr_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=5, verbose=True)`\r\n\r\n```\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=collate_fn,\r\n train_dataset=prepared_ds[\"train\"],\r\n eval_dataset=prepared_ds[\"validation\"],\r\n tokenizer=feature_extractor,\r\n compute_metrics=compute_metrics,\r\n optimizers=(optimizer, lr_scheduler)\r\n)\r\n\r\n```"
] | 1,682 | 1,699 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR solves #16503 by adding support for PyTorch's [ReduceLROnPlateau](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ReduceLROnPlateau.html) to `Trainer`.
It does so by adding a new `REDUCE_ON_PLATEAU` field to `SchedulerType` and a new `reduce_lr_on_plateau_args` parameter to `TrainingArguments` that is parsed at initialization to avoid adding 9 new individual arguments. The scheduler re-uses the metric stored in `metric_for_best_model`, and is delayed to run after evaluation since it requires metrics to be populated.
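A minimal sketch of why the step must be delayed: unlike other schedulers, `ReduceLROnPlateau.step()` takes the metric itself (plain PyTorch, not the Trainer code):
```python
import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min", patience=2)

eval_loss = 0.42  # would come from the evaluation loop
scheduler.step(eval_loss)  # can only run once metrics are populated
```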
I'm not sure whether it is due to the complexity of `Trainer`, my lack of experience (this is my first PR to a large project) or the uniqueness of `ReduceLROnPlateau` compared to other schedulers, but this PR feels a bit hacky, so I welcome any feedback.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Looking at #16503, I believe this is for @sgugger.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23010/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23010",
"html_url": "https://github.com/huggingface/transformers/pull/23010",
"diff_url": "https://github.com/huggingface/transformers/pull/23010.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23010.patch",
"merged_at": 1682687851000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23009
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23009/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23009/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23009/events
|
https://github.com/huggingface/transformers/issues/23009
| 1,685,154,484 |
I_kwDOCUB6oc5kcWq0
| 23,009 |
whisper identified the wrong language
|
{
"login": "LYPinASR",
"id": 112866899,
"node_id": "U_kgDOBro2Uw",
"avatar_url": "https://avatars.githubusercontent.com/u/112866899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LYPinASR",
"html_url": "https://github.com/LYPinASR",
"followers_url": "https://api.github.com/users/LYPinASR/followers",
"following_url": "https://api.github.com/users/LYPinASR/following{/other_user}",
"gists_url": "https://api.github.com/users/LYPinASR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LYPinASR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LYPinASR/subscriptions",
"organizations_url": "https://api.github.com/users/LYPinASR/orgs",
"repos_url": "https://api.github.com/users/LYPinASR/repos",
"events_url": "https://api.github.com/users/LYPinASR/events{/privacy}",
"received_events_url": "https://api.github.com/users/LYPinASR/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi there. Questions like this are better suited on the [forums](https://discuss.huggingface.co/) or a discussion on the model page as we keep issues for bugs and feature requests only.",
"If you use pipeline, you should add option like \r\ngenerate_kwargs = {\"task\":\"transcribe\", \"language\":\"<|fr|>\"}\r\n\r\n\r\nref1: https://colab.research.google.com/drive/1rS1L4YSJqKUH_3YxIQHBI982zso23wor#scrollTo=dPD20IkEDsbG\r\nref2: https://github.com/huggingface/transformers/issues/22331\r\n\r\nhowever, I think default task should be \"transcribe\" not \"translate\". I insist It's an error.",
"I have solved the problem.\r\nStep 1: Upgrade transformers, unfixed.\r\nStep 2: Add option like \"generate_kwargs = {\"task\":\"transcribe\", \"language\":\"<|fr|>\"}\", unfixed.\r\nStep 3: Add a line like \"pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(language=\"ko\", task=\"transcribe\")\", fixed.\r\n\r\nHowever, I still don't understand why the original model output is English but the fine-tuned model output is in Korean.",
"maybe you can checked your fine-tuned model's config.json or generation_config.json, double check the default task type, I think it's null or \"transcribe\"",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
NONE
| null |
### Feature request
When I follow the example of long-form transcription with whisper-large on Korean audio, the output is English. But after fine-tuning the whisper-large model on some Korean data, the checkpoint outputs Korean. I also tested other model sizes, but all of them output English.
I am confused by this. What should I do to get Korean output from the original model?
Thank you!
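A hedged sketch of the workaround that later surfaced in the comments (the model id is assumed; `get_decoder_prompt_ids` is the relevant tokenizer API):
```python
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="openai/whisper-large")
# Pin the language and task so detection cannot fall back to English translation
pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(
    language="ko", task="transcribe"
)
```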
### Motivation
Test whisper in Korean.
### Your contribution
Test whisper in Korean.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23009/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23008
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23008/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23008/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23008/events
|
https://github.com/huggingface/transformers/pull/23008
| 1,685,018,030 |
PR_kwDOCUB6oc5PM0ob
| 23,008 |
🌐 [i18n-KO] Translated `multilingual.mdx` to Korean
|
{
"login": "HanNayeoniee",
"id": 33839093,
"node_id": "MDQ6VXNlcjMzODM5MDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/33839093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HanNayeoniee",
"html_url": "https://github.com/HanNayeoniee",
"followers_url": "https://api.github.com/users/HanNayeoniee/followers",
"following_url": "https://api.github.com/users/HanNayeoniee/following{/other_user}",
"gists_url": "https://api.github.com/users/HanNayeoniee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HanNayeoniee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HanNayeoniee/subscriptions",
"organizations_url": "https://api.github.com/users/HanNayeoniee/orgs",
"repos_url": "https://api.github.com/users/HanNayeoniee/repos",
"events_url": "https://api.github.com/users/HanNayeoniee/events{/privacy}",
"received_events_url": "https://api.github.com/users/HanNayeoniee/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" -->
# What does this PR do?
Translated the `multilingual.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- This leaves a record in the main issue! If you are practicing with the PseudoLab repo, please remove this. :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations
- [x] Grammar and spelling check
- [x] Review or add new terms to the glossary
- [x] Check inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (confirm it renders correctly in the live preview)
## Who can review? (Initial)
<!-- 1. Please only reveal the review request below to PseudoLab team members after all the checks above are complete! -->
Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Please only reveal the review request below to Hugging Face staff after the review with the PseudoLab team members is finished! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23008/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23008/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23008",
"html_url": "https://github.com/huggingface/transformers/pull/23008",
"diff_url": "https://github.com/huggingface/transformers/pull/23008.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23008.patch",
"merged_at": 1682597172000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23007
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23007/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23007/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23007/events
|
https://github.com/huggingface/transformers/pull/23007
| 1,684,996,477 |
PR_kwDOCUB6oc5PMv6s
| 23,007 |
Update test_beam_constraints.py
|
{
"login": "mzamini92",
"id": 32536264,
"node_id": "MDQ6VXNlcjMyNTM2MjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/32536264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mzamini92",
"html_url": "https://github.com/mzamini92",
"followers_url": "https://api.github.com/users/mzamini92/followers",
"following_url": "https://api.github.com/users/mzamini92/following{/other_user}",
"gists_url": "https://api.github.com/users/mzamini92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mzamini92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mzamini92/subscriptions",
"organizations_url": "https://api.github.com/users/mzamini92/orgs",
"repos_url": "https://api.github.com/users/mzamini92/repos",
"events_url": "https://api.github.com/users/mzamini92/events{/privacy}",
"received_events_url": "https://api.github.com/users/mzamini92/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
NONE
| null |
# What does this PR do?
The advantage of using `assertEqual` over `==` is that it provides more informative error messages in case of a failure. For example, if you use `assertEqual(a, b)` and the assertion fails, the error message will include the values of `a` and `b` as well as the test name and line number, which makes it easier to identify and fix the problem. Similarly, the advantage of using `assert_` over a bare `==` check is that it provides a more informative error message. `assert_` is a method provided by the `unittest.TestCase` class that takes a single argument and asserts that it evaluates to True. If the argument is not True, the test fails and an error message is printed that includes the test name and line number.
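For illustration, a minimal self-contained sketch of the difference (this is not taken from the actual test file):
```python
import unittest


class ComparisonDemo(unittest.TestCase):
    def test_plain_assert(self):
        # On failure, a bare `assert` raises AssertionError with no context.
        assert 2 + 2 == 4

    def test_assert_equal(self):
        # On failure, assertEqual reports both values (e.g. "5 != 4"),
        # which makes the mismatch much easier to diagnose.
        self.assertEqual(2 + 2, 4)


if __name__ == "__main__":
    unittest.main()
```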
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23007/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23007",
"html_url": "https://github.com/huggingface/transformers/pull/23007",
"diff_url": "https://github.com/huggingface/transformers/pull/23007.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23007.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/23006
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23006/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23006/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23006/events
|
https://github.com/huggingface/transformers/pull/23006
| 1,684,951,719 |
PR_kwDOCUB6oc5PMmOT
| 23,006 |
[`PEFT`] Add HFTracer support for PEFT
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm not against the auto-formatting changes, but could we have them in a separate PR please? This is polluting the diff here."
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
For more context, PiPPy is a library providing out-of-the-box pipeline parallelism for torch models. PiPPy heavily relies on the HF tracer under the hood.
Interest has grown in supporting PiPPy for PEFT models: https://github.com/huggingface/peft/issues/194#issuecomment-1496767740 but it appeared that, before this PR, PEFT models were not supported by the HF tracer for several reasons. This PR addresses that by:
1- Relaxing the constraints of the model check in the tracing mechanism
2- Defining a proper `__iter__` method on the `HFProxy` class to properly handle `**kwargs` calls in forward passes
A proper testing suite will be added in PEFT, as a set of slow tests, since the GH runner uses an environment compatible with the main branch of transformers.
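As an illustration, tracing a PEFT-wrapped model now looks roughly like this (an untested sketch; it assumes `peft` is installed and uses `gpt2` merely as an example checkpoint):
```python
from transformers import AutoModelForCausalLM
from transformers.utils.fx import symbolic_trace
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")
peft_model = get_peft_model(base, LoraConfig(task_type="CAUSAL_LM"))

# Before this PR, the strict model check and the missing `HFProxy.__iter__`
# made this call fail; now the PEFT-wrapped module can be traced as well.
traced = symbolic_trace(peft_model, input_names=["input_ids", "attention_mask"])
```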
Thanks a lot @michaelbenayoun for digging into the issue with me
cc @michaelbenayoun @sgugger @pacman100
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23006/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23006/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23006",
"html_url": "https://github.com/huggingface/transformers/pull/23006",
"diff_url": "https://github.com/huggingface/transformers/pull/23006.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23006.patch",
"merged_at": 1682527506000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23005
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23005/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23005/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23005/events
|
https://github.com/huggingface/transformers/issues/23005
| 1,684,735,929 |
I_kwDOCUB6oc5kawe5
| 23,005 |
Pycharm Debug Mode Errors
|
{
"login": "taeyeopl",
"id": 50736858,
"node_id": "MDQ6VXNlcjUwNzM2ODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/50736858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taeyeopl",
"html_url": "https://github.com/taeyeopl",
"followers_url": "https://api.github.com/users/taeyeopl/followers",
"following_url": "https://api.github.com/users/taeyeopl/following{/other_user}",
"gists_url": "https://api.github.com/users/taeyeopl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taeyeopl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taeyeopl/subscriptions",
"organizations_url": "https://api.github.com/users/taeyeopl/orgs",
"repos_url": "https://api.github.com/users/taeyeopl/repos",
"events_url": "https://api.github.com/users/taeyeopl/events{/privacy}",
"received_events_url": "https://api.github.com/users/taeyeopl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"There is nothing in that traceback that is linked to the Transformers package. It is all in stable-dreamfusion.",
"Thanks, I will recheck it!"
] | 1,682 | 1,682 | 1,682 |
NONE
| null |
### System Info
I am using PyCharm debug mode. There is no problem with transformers==4.24.0, but with any version above 4.24.0 I get the errors below in debug mode. The code works fine outside debug mode; the problem only occurs in debug mode, and only with transformers versions above 4.24.0. My environment is Ubuntu 18.04, Torch 1.12.1, CUDA 11.3.
```
/home/miruware/anaconda3/envs/dreamfusion/bin/python /snap/pycharm-professional/331/plugins/python/helpers/pydev/pydevd.py --multiprocess --qt-support=auto --client 127.0.0.1 --port 32875 --file /home/miruware/ssd_4tb/diffusion/workspace/stable-dreamfusion/main.py -O --image data/hamburger_rgba.png --workspace results/test --iters 5000
/home/miruware/anaconda3/envs/dreamfusion/lib/python3.9/site-packages/transformers/models/clip/feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
warnings.warn(
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /snap/pycharm-professional/331/plugins/python/helpers/pydev/pydevd.py:55 in │
│ <module> │
│ │
│ 52 from _pydevd_bundle.pydevd_custom_frames import CustomFramesContainer │
│ 53 from _pydevd_bundle.pydevd_frame_utils import add_exception_to_frame, │
│ 54 from _pydevd_bundle.pydevd_kill_all_pydevd_threads import kill_all_py │
│ ❱ 55 from _pydevd_bundle.pydevd_trace_dispatch import ( │
│ 56 │ trace_dispatch as _trace_dispatch, global_cache_skips, global_cac │
│ 57 from _pydevd_frame_eval.pydevd_frame_eval_main import ( │
│ 58 │ frame_eval_func, dummy_trace_dispatch, show_frame_eval_warning) │
│ │
│ /snap/pycharm-professional/331/plugins/python/helpers/pydev/_pydevd_bundle/p │
│ ydevd_trace_dispatch.py:60 in <module> │
│ │
│ 57 elif use_cython is None: │
│ 58 │ # Regular: use fallback if not found and give message to user │
│ 59 │ try: │
│ ❱ 60 │ │ from _pydevd_bundle.pydevd_cython_wrapper import trace_dispatch │
│ 61 │ │ def trace_dispatch(py_db, frame, event, arg): │
│ 62 │ │ │ if _trace_dispatch is None: │
│ 63 │ │ │ │ return None │
│ │
│ /snap/pycharm-professional/331/plugins/python/helpers/pydev/_pydevd_bundle/p │
│ ydevd_cython_wrapper.py:4 in <module> │
│ │
│ 1 import sys │
│ 2 │
│ 3 # This version number is always available │
│ ❱ 4 from _pydevd_bundle.pydevd_additional_thread_info_regular import versio │
│ 5 │
│ 6 try: │
│ 7 │ try: │
│ │
│ /snap/pycharm-professional/331/plugins/python/helpers/pydev/_pydevd_bundle/p │
│ ydevd_additional_thread_info_regular.py:7 in <module> │
│ │
│ 4 # IFDEF CYTHON │
│ 5 # pydev_log.debug("Using Cython speedups") │
│ 6 # ELSE │
│ ❱ 7 from _pydevd_bundle.pydevd_frame import PyDBFrame │
│ 8 # ENDIF │
│ 9 │
│ 10 version = 37 │
│ │
│ /snap/pycharm-professional/331/plugins/python/helpers/pydev/_pydevd_bundle/p │
│ ydevd_frame.py:32 in <module> │
│ │
│ 29 from _pydevd_bundle.pydevd_constants import IS_PY2 │
│ 30 │
│ 31 try: │
│ ❱ 32 │ from _pydevd_bundle.pydevd_signature import send_signature_call_tr │
│ 33 except ImportError: │
│ 34 │ def send_signature_call_trace(*args, **kwargs): │
│ 35 │ │ pass │
│ │
│ /snap/pycharm-professional/331/plugins/python/helpers/pydev/_pydevd_bundle/p │
│ ydevd_signature.py:3 in <module> │
│ │
│ 1 │
│ 2 try: │
│ ❱ 3 │ import trace │
│ 4 except ImportError: │
│ 5 │ pass │
│ 6 else: │
│ │
│ /home/miruware/ssd_4tb/diffusion/workspace/stable-dreamfusion/trace.py:31 in │
│ <module> │
│ │
│ 28 unet.forward = functools.partial(unet.forward, return_dict=False) # se │
│ 29 │
│ 30 # load inputs │
│ ❱ 31 train_latent_model_input = torch.load("train_latent_model_input.pt").to │
│ 32 train_t = torch.load("train_t.pt").to(torch.float16) │
│ 33 train_text_embeddings = torch.load("train_text_embeddings.pt").to(torch │
│ 34 │
│ │
│ /home/miruware/anaconda3/envs/dreamfusion/lib/python3.9/site-packages/torch/ │
│ serialization.py:699 in load │
│ │
│ 696 │ if 'encoding' not in pickle_load_args.keys(): │
│ 697 │ │ pickle_load_args['encoding'] = 'utf-8' │
│ 698 │ │
│ ❱ 699 │ with _open_file_like(f, 'rb') as opened_file: │
│ 700 │ │ if _is_zipfile(opened_file): │
│ 701 │ │ │ # The zipfile reader is going to advance the current file │
│ 702 │ │ │ # If we want to actually tail call to torch.jit.load, we │
│ │
│ /home/miruware/anaconda3/envs/dreamfusion/lib/python3.9/site-packages/torch/ │
│ serialization.py:230 in _open_file_like │
│ │
│ 227 │
│ 228 def _open_file_like(name_or_buffer, mode): │
│ 229 │ if _is_path(name_or_buffer): │
│ ❱ 230 │ │ return _open_file(name_or_buffer, mode) │
│ 231 │ else: │
│ 232 │ │ if 'w' in mode: │
│ 233 │ │ │ return _open_buffer_writer(name_or_buffer) │
│ │
│ /home/miruware/anaconda3/envs/dreamfusion/lib/python3.9/site-packages/torch/ │
│ serialization.py:211 in __init__ │
│ │
│ 208 │
│ 209 class _open_file(_opener): │
│ 210 │ def __init__(self, name, mode): │
│ ❱ 211 │ │ super(_open_file, self).__init__(open(name, mode)) │
│ 212 │ │
│ 213 │ def __exit__(self, *args): │
│ 214 │ │ self.file_like.close() │
╰──────────────────────────────────────────────────────────────────────────────╯
FileNotFoundError: [Errno 2] No such file or directory:
'train_latent_model_input.pt'
Process finished with exit code 1
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Any example using the official diffusers (0.15.1) code with the transformers (4.28.1) library.
### Expected behavior
No errors during PyCharm debug mode.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23005/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23004
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23004/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23004/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23004/events
|
https://github.com/huggingface/transformers/pull/23004
| 1,684,714,313 |
PR_kwDOCUB6oc5PLy-S
| 23,004 |
🚨🚨🚨 [`Pix2Struct`] Attempts to fix training issues 🚨🚨🚨
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"For me it's the same thing as what we discussed with Luke yesterday. It's important to have a consistent API so 100% for:\r\n- leaving label_smoothing out of the loss computation by default (users can compute the loss themselves by not passing the logits or using the Trainer with label_smoothing)\r\n- using -100 as ignore index and not the pad token (this is something I should have caught in the review, and we have already gone to a lot of trouble to harmonize all models to this)",
"Thanks for the review! \r\nLet me know if I should also make the changes for BLIP as well as you suggested @amyeroberts ",
"@younesbelkada Yes please! ",
"Hmm actually this broke some slow tests, I will need to dig more into that combined with https://github.com/huggingface/transformers/issues/22903#issuecomment-1525771904 , will let you know\r\n",
"False alarm, I will just properly document how to use pix2struct for conditional text generation!",
"Hi,\r\n\r\nI'm having problems when trying to fine-tune the pix2struct model.\r\nI am basing myself on the notebook from https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Pix2Struct and that is why I have generated an issue in that repository where I explain my problem in detail: https://github.com/NielsRogge/Transformers-Tutorials/issues/293\r\n\r\nMainly it is that after several trainings and inferences with the generated models I see that the result of the inference is always the same, regardless of the input image.\r\n\r\nDo you know what could be happening or how to fix it?\r\n"
] | 1,682 | 1,684 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR attempts to partially fix https://github.com/huggingface/transformers/issues/22903 for a better user experience when training `Pix2Struct`.
As stated in the aforementioned issue, some users are having a hard time training Pix2Struct, for several reasons, among them:
- Forcing the addition of the special tokens when encoding text --> otherwise the model will keep repeating the generated text
- Removing label smoothing to comply with the design of other model architectures
- Also removing label masking to be consistent with other models. As noted in https://github.com/huggingface/transformers/issues/22903#issuecomment-1518275840, I agree it should be the user's responsibility to add that masking
With these fixes, the following script:
```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, AutoProcessor
from torch.optim import AdamW
import torch
torch.manual_seed(42)
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-base", torch_dtype=torch.bfloat16)
processor = AutoProcessor.from_pretrained("google/pix2struct-base")
dummy_target = "The model should overfit this sentence"
image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)
encoded_image = processor(images=image, return_tensors="pt")
encoded_text = processor(text=dummy_target, return_tensors='pt', max_length=20)
optimizer = AdamW(model.parameters(), lr=1e-4)
model.train()
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device)
flattened_patches=encoded_image.flattened_patches.to(device).to(torch.bfloat16)
attention_mask=encoded_image.attention_mask.to(device)
labels=encoded_text.input_ids.to(device)
for i in range(1000):
    outputs = model(
        flattened_patches=flattened_patches,
        attention_mask=attention_mask,
        labels=labels
    )
    loss = outputs.loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    if i % 50 == 0:
        model.eval()
        prediction = model.generate(
            flattened_patches=flattened_patches,
            attention_mask=attention_mask)
        print(f'step: {i} train_loss: {loss.item()} prediction: {processor.batch_decode(prediction)}')
        model.train()
```
Goes from outputting:
```bash
step: 0 train_loss: 8.259493827819824 prediction: ['<pad> <img_src=cropped-img-20180924']
step: 50 train_loss: 1.9695181846618652 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 100 train_loss: 2.071323871612549 prediction: ['<pad> <The model should overfit this sentence should overfit this sentence should overfit this sentence should']
step: 150 train_loss: 2.0366554260253906 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 200 train_loss: 1.8225889205932617 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 250 train_loss: 1.6568734645843506 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 300 train_loss: 1.6770282983779907 prediction: ['<pad> The model should overfit this sentence sentence should overfit this sentence sentence should overfit this sentence']
step: 350 train_loss: 1.688515067100525 prediction: ['<pad> The model should overfit this sentence sentence overfit this sentence sentence overfit this sentence sentence over']
step: 400 train_loss: 1.6118296384811401 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 450 train_loss: 1.6204414367675781 prediction: ['<pad> The model should overfit this sentence sentence should overfit this sentence should overfit this sentence should']
step: 500 train_loss: 1.59645676612854 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 550 train_loss: 1.5818239450454712 prediction: ['<pad> The model should overfit this sentence sentence sentence sentence sentence sentence sentence sentence sentence sentence sentence sentence sentence']
step: 600 train_loss: 1.5775129795074463 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 650 train_loss: 1.561257243156433 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 700 train_loss: 1.5319150686264038 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 750 train_loss: 1.646193504333496 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 800 train_loss: 1.533736228942871 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 850 train_loss: 1.6203268766403198 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 900 train_loss: 1.5132172107696533 prediction: ['<pad> The model should overfit this sentence sentence should overfit this sentence sentence should overfit this sentence']
step: 950 train_loss: 1.491452693939209 prediction: ['<pad> The model should overfit this sentence The model should overfit this sentence The model should overfit']
```
To:
```bash
step: 0 train_loss: 9.75 prediction: ['<pad> <<img_src=1> <img_src=2> <img_src=']
step: 50 train_loss: 0.125 prediction: ['<pad> <<img_src=1> <img_src=1> <img_src=']
step: 100 train_loss: 0.0089111328125 prediction: ['<pad> The model should overfit this sentence</s>']
...
```
cc @sgugger @amyeroberts @NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23004/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23004/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23004",
"html_url": "https://github.com/huggingface/transformers/pull/23004",
"diff_url": "https://github.com/huggingface/transformers/pull/23004.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23004.patch",
"merged_at": 1682526565000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23003
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23003/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23003/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23003/events
|
https://github.com/huggingface/transformers/issues/23003
| 1,684,681,229 |
I_kwDOCUB6oc5kajIN
| 23,003 |
PipelineChunkIterator does not provide the correct length
|
{
"login": "adrianeboyd",
"id": 5794899,
"node_id": "MDQ6VXNlcjU3OTQ4OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5794899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adrianeboyd",
"html_url": "https://github.com/adrianeboyd",
"followers_url": "https://api.github.com/users/adrianeboyd/followers",
"following_url": "https://api.github.com/users/adrianeboyd/following{/other_user}",
"gists_url": "https://api.github.com/users/adrianeboyd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adrianeboyd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adrianeboyd/subscriptions",
"organizations_url": "https://api.github.com/users/adrianeboyd/orgs",
"repos_url": "https://api.github.com/users/adrianeboyd/repos",
"events_url": "https://api.github.com/users/adrianeboyd/events{/privacy}",
"received_events_url": "https://api.github.com/users/adrianeboyd/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @adrianeboyd I am experiencing this same issue. Did you manage to solve this? Or was it just a versioning thingy?",
"As far as I know this hasn't changed in any newer releases. I think that the implementation works in practice, but it triggers these warnings from pytorch that are trying to protect you from yourself in case you've written a faulty iterator. The problem is that it's returning as the length the number of texts rather than the number of (strided) subtexts that will be processed in the end. But with the multiple levels of iterators involved I wasn't sure how to fix it for all possible use cases.",
"Cool, thanks for the explanation. I had not spent time on it yet but will ignore and disable the warnings for now.",
" I am having the same problem. Hope it gets fixed soon."
] | 1,682 | 1,699 | 1,685 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- PyTorch version (GPU?): 2.0.0+cu117 (True)
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Relatively minor in the scheme of things, but I looked into it a bit to make sure it wasn't an issue with batching.
```python
from transformers import pipeline
pipe = pipeline("token-classification")
pipe(["New York " * 600] * 2, stride=0)
```
Leads to noisy warnings from torch:
```none
/tmp/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py:646: UserWarning: Length of IterableDataset <transformers.pipelines.pt_utils.PipelineChunkIterator object at 0x7f084a9bce50> was reported to be 2 (when accessing len(dataloader)), but 3 samples have been fetched.
warnings.warn(warn_msg)
/tmp/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py:646: UserWarning: Length of IterableDataset <transformers.pipelines.pt_utils.PipelineChunkIterator object at 0x7f084a9bce50> was reported to be 2 (when accessing len(dataloader)), but 4 samples have been fetched.
warnings.warn(warn_msg)
/tmp/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py:646: UserWarning: Length of IterableDataset <transformers.pipelines.pt_utils.PipelineChunkIterator object at 0x7f084a9bce50> was reported to be 2 (when accessing len(dataloader)), but 5 samples have been fetched.
warnings.warn(warn_msg)
/tmp/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py:646: UserWarning: Length of IterableDataset <transformers.pipelines.pt_utils.PipelineChunkIterator object at 0x7f084a9bce50> was reported to be 2 (when accessing len(dataloader)), but 6 samples have been fetched.
warnings.warn(warn_msg)
```
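As a stopgap (also mentioned in the comments below), the specific warning can be silenced; this is a workaround sketch, not a fix:
```python
import warnings

# Silence only the DataLoader length warning triggered by PipelineChunkIterator.
warnings.filterwarnings(
    "ignore",
    message="Length of IterableDataset",
    category=UserWarning,
)
```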
### Expected behavior
`PipelineChunkIterator` provides the intended length, no noisy warnings.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23003/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23002
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23002/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23002/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23002/events
|
https://github.com/huggingface/transformers/pull/23002
| 1,684,534,650 |
PR_kwDOCUB6oc5PLL9v
| 23,002 |
added GPTNeoXForTokenClassification
|
{
"login": "peter-sk",
"id": 6168908,
"node_id": "MDQ6VXNlcjYxNjg5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6168908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peter-sk",
"html_url": "https://github.com/peter-sk",
"followers_url": "https://api.github.com/users/peter-sk/followers",
"following_url": "https://api.github.com/users/peter-sk/following{/other_user}",
"gists_url": "https://api.github.com/users/peter-sk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peter-sk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peter-sk/subscriptions",
"organizations_url": "https://api.github.com/users/peter-sk/orgs",
"repos_url": "https://api.github.com/users/peter-sk/repos",
"events_url": "https://api.github.com/users/peter-sk/events{/privacy}",
"received_events_url": "https://api.github.com/users/peter-sk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Ready for review, @ArthurZucker and @younesbelkada 👍 "
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
It adds the class GPTNeoXForTokenClassification, which allows using GPT NeoX models for token classification tasks. The implementation follows the one for other models (such as GPT2 and GPT Neo) closely and simply adds a linear layer after the hidden states.
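For example, usage would look roughly like this (a sketch only; `EleutherAI/pythia-70m` is just one GPT-NeoX checkpoint, and `num_labels=5` is arbitrary):
```python
import torch
from transformers import AutoTokenizer, GPTNeoXForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
model = GPTNeoXForTokenClassification.from_pretrained("EleutherAI/pythia-70m", num_labels=5)

inputs = tokenizer("HuggingFace is based in New York City", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(-1))  # one predicted label id per token
```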
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
@younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23002/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/23002",
"html_url": "https://github.com/huggingface/transformers/pull/23002",
"diff_url": "https://github.com/huggingface/transformers/pull/23002.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/23002.patch",
"merged_at": 1682608106000
}
|
https://api.github.com/repos/huggingface/transformers/issues/23001
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23001/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23001/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23001/events
|
https://github.com/huggingface/transformers/issues/23001
| 1,684,455,361 |
I_kwDOCUB6oc5kZr_B
| 23,001 |
`return_overflowing_tokens` has different behavior between slow tokenizer and fast tokenizer
|
{
"login": "BuxianChen",
"id": 30834226,
"node_id": "MDQ6VXNlcjMwODM0MjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/30834226?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BuxianChen",
"html_url": "https://github.com/BuxianChen",
"followers_url": "https://api.github.com/users/BuxianChen/followers",
"following_url": "https://api.github.com/users/BuxianChen/following{/other_user}",
"gists_url": "https://api.github.com/users/BuxianChen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BuxianChen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BuxianChen/subscriptions",
"organizations_url": "https://api.github.com/users/BuxianChen/orgs",
"repos_url": "https://api.github.com/users/BuxianChen/repos",
"events_url": "https://api.github.com/users/BuxianChen/events{/privacy}",
"received_events_url": "https://api.github.com/users/BuxianChen/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] |
closed
| false | null |
[] |
[
"cc @ArthurZucker but I think the overflowing tokens is specifically a feature of our fast tokenizers, so it's completely normal that you don't ahve it in the slow ones.",
"Hey! Thanks for reporting this. No it seems that the `return_overflowing_tokens` logic is implemented in the base class, so might be interesting to look at this. I'll have a look when I can, in the mean time labelling as a tokenizers bug\r\n",
"Okay, it seems that there is a difference in design, `tokenizers` library returns a batch of overflowing tokens, which takes into account the max length and stride. So it creates a batch from a non batched sentence, which could (?) be what was originally intended. However, this will fail if `return_tensors=True` with an error. \r\nOn the other hand, `transformers` just cuts the input sentence and returns everything that was truncated, without creating this strange behaviour. \r\nI am not really sure what is best honestly, cc @Narsil I think it's fine to just leave it as is? ( I can edit the doc to make sure that the format in slow is different from fast ?)",
"Yes I'm not sure we should do something about it.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\r\n> \r\n> Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.\r\n\r\nThe problem still exists in the latest version",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,692 | 1,692 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.8 (cpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@Arthur
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm studying [chapter 6 of the NLP course](https://huggingface.co/learn/nlp-course/en/chapter6/3b), and I found that `return_overflowing_tokens` behaves differently between the slow tokenizer and the fast tokenizer. Is this a feature or a bug?
```python
from transformers import DistilBertTokenizer, DistilBertTokenizerFast
model_checkpoint = "distilbert-base-cased-distilled-squad"
slow_tokenizer = DistilBertTokenizer.from_pretrained(model_checkpoint)
fast_tokenizer = DistilBertTokenizerFast.from_pretrained(model_checkpoint)
```
```python
sentence = "This sentence is not too long but we are going to split it anyway."
inputs = fast_tokenizer(
    sentence, truncation=True, return_overflowing_tokens=True, max_length=6, stride=2
)
print(inputs["input_ids"])
```
Then I get the following output:
```
[[101, 1188, 5650, 1110, 1136, 1315, 1263, 102], [101, 1315, 1263, 1133, 1195, 1132, 1280, 102], [101, 1132, 1280, 1106, 3325, 1122, 4050, 102], [101, 1122, 4050, 119, 102]]
```
but when I replace `fast_tokenizer` with `slow_tokenizer`, I get
```
[101, 1188, 5650, 1110, 1136, 1315, 1263, 102]
```
### Expected behavior
The slow tokenizer should behave the same as the fast tokenizer.
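In the meantime, a manual sliding window over the slow tokenizer's ids can approximate the fast behaviour (an untested sketch; the stride semantics may not match the fast tokenizer exactly):
```python
ids = slow_tokenizer(sentence, add_special_tokens=False)["input_ids"]
max_length, stride = 6, 2
chunk = max_length - 2  # leave room for [CLS] and [SEP]
step = chunk - stride
overflowing = [
    slow_tokenizer.build_inputs_with_special_tokens(ids[i : i + chunk])
    for i in range(0, len(ids), step)
]
print(overflowing)
```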
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23001/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/23000
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/23000/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/23000/comments
|
https://api.github.com/repos/huggingface/transformers/issues/23000/events
|
https://github.com/huggingface/transformers/issues/23000
| 1,684,437,734 |
I_kwDOCUB6oc5kZnrm
| 23,000 |
Possible bug in BlipForQuestionAnswering loss computation due to redundant right-shift
|
{
"login": "verityw",
"id": 39648931,
"node_id": "MDQ6VXNlcjM5NjQ4OTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/39648931?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/verityw",
"html_url": "https://github.com/verityw",
"followers_url": "https://api.github.com/users/verityw/followers",
"following_url": "https://api.github.com/users/verityw/following{/other_user}",
"gists_url": "https://api.github.com/users/verityw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/verityw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/verityw/subscriptions",
"organizations_url": "https://api.github.com/users/verityw/orgs",
"repos_url": "https://api.github.com/users/verityw/repos",
"events_url": "https://api.github.com/users/verityw/events{/privacy}",
"received_events_url": "https://api.github.com/users/verityw/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada I think the thread above is correct, either:\r\n* `input_ids` and `labels` are the same but then one needs to shift the `logits` when computing the loss. Examples of models that do this are GPT-2, GPT-J, etc\r\n* you shift the `input_ids` (actually `decoder_input_ids` in case of a decoder) before feeding them to the Transformer and then you don't need to shift the `logits`. Examples of models that do this are T5, BART, etc.\r\n\r\nThis can probably be confirmed by fine-tuning `BlipForQuestionAnswering` on 10 example image, question and answer triplets and see whether the model is able to overfit them.",
"@NielsRogge @younesbelkada I just tried out your suggestion, and sure enough, it could not overfit to them. All answers are of the form ([CLS], a, [some noun], ., [SEP]). With the current implementation, the first shift changes that to (\"\", [cls], a, [some noun], .). Then, the second shift changes the pairing to inputs = (\"\", [cls], a, [some noun]) and labels = (a, [some noun], ., [sep]), e.g., next-next token prediction.\r\n\r\nThe outputs there are:\r\n```\r\n['', 'a', '.', 'cat', '[SEP]']\r\n['', 'a', '.', 'dog', '[SEP]']\r\n['', 'a', '.', 'wolf', '[SEP]']\r\n['', 'a', '.', 'bear', '[SEP]']\r\n```\r\nDue to it learning next-next token prediction, \"\" is always followed by 'a', which is followed by '.' It never learns what should come after '.' but it just outputs the noun, which is then two tokens away from [SEP].\r\n\r\nHowever, I found a fix. Instead of shifting the decoder's `input_ids` but not the `labels`, shift _both_, but do NOT get rid of the final character (since that's [SEP], which it should learn as the final character). Here's my code:\r\n```python\r\n# Now, train without redundant shift!\r\nmodel = BlipForQuestionAnswering.from_pretrained(\"Salesforce/blip-vqa-base\").to(device)\r\nprocessor = AutoProcessor.from_pretrained(\"Salesforce/blip-vqa-base\")\r\noptimizer = transformers.AdamW(model.parameters(), lr=3e-5)\r\n\r\nreturn_dict = model.config.use_return_dict\r\noutput_attentions = model.config.output_attentions\r\noutput_hidden_states = model.config.output_hidden_states\r\nfor i in range(40):\r\n total_loss = 0\r\n for inputs in training_points:\r\n # Copy-pasted code from BlipForQuestionAnswering.forward()\r\n vision_outputs = model.vision_model(\r\n pixel_values=inputs.pixel_values,\r\n output_attentions=output_attentions,\r\n output_hidden_states=output_hidden_states,\r\n return_dict=return_dict,\r\n )\r\n image_embeds = vision_outputs[0]\r\n image_attention_mask = torch.ones(image_embeds.size()[:-1], dtype=torch.long)\r\n question_embeds = model.text_encoder(\r\n input_ids=inputs.input_ids,\r\n encoder_hidden_states=image_embeds,\r\n encoder_attention_mask=image_attention_mask,\r\n return_dict=return_dict,\r\n )\r\n question_embeds = question_embeds[0] if not return_dict else question_embeds.last_hidden_state\r\n \r\n # Shift both the labels AND the input_ids. However, do not delete final [SEP] character.\r\n labels = inputs.labels.new_zeros(inputs.labels.shape[0], inputs.labels.shape[1] + 1)\r\n labels[..., 1:] = inputs.labels\r\n labels[..., 0] = model.decoder_start_token_id\r\n \r\n output = model.text_decoder(\r\n input_ids=labels,\r\n encoder_hidden_states=question_embeds,\r\n labels=labels,\r\n return_dict=return_dict,\r\n reduction=\"mean\",\r\n )\r\n\r\n \r\n loss = output.loss.mean() if return_dict else answer_output[0].mean()\r\n total_loss += loss\r\n optimizer.zero_grad()\r\n loss.backward()\r\n optimizer.step()\r\n```\r\nThen, when decoding:\r\n```python\r\nfor inputs in training_points:\r\n outputs = model.generate(input_ids = inputs.input_ids,\r\n pixel_values = inputs.pixel_values)\r\n print(processor.batch_decode(outputs[0]))\r\n```\r\n(where the `input_ids` are tokens for \"What animal is this?\")\r\n\r\nThe end result:\r\n```\r\n['', '[CLS]', 'a', 'cat', '.', '[SEP]']\r\n['', '[CLS]', 'a', 'dog', '.', '[SEP]']\r\n['', '[CLS]', 'a', 'wolf', '.', '[SEP]']\r\n['', '[CLS]', 'a', 'bear', '.', '[SEP]']\r\n```\r\n\r\n(Sorry I can't link directly to my code -- if it's really necessary/convenient, let me know and I can convert it into a colab nb)",
"Hi @verityw \r\n\r\nThanks for flagging this! I made an attempt to fix your issue in https://github.com/huggingface/transformers/pull/23153\r\nI am not sure this fixes 100% your problem, as I don't have access to your code, can you try to uninstall `transformers` and install `transformers` from that branch and let us know if you still face the issue?\r\n```bash\r\npip install git+https://github.com/younesbelkada/transformers.git@blip-qa-loss-fix\r\n```"
] | 1,682 | 1,684 | 1,684 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.14.0
- Safetensors version: not installed
- PyTorch version (GPU?): 1.10.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In `BlipForQuestionAnswering.forward()`, if the `labels` parameter is provided, then [lines 1218-1222](https://github.com/huggingface/transformers/blob/v4.28.1/src/transformers/models/blip/modeling_blip.py#L1218) set `decoder_input_ids` to the right-shifted version of `labels` and pass both variables into `self.text_decoder`, which is an instance of `BlipTextLMHeadModel`:
```python
if labels is not None and decoder_input_ids is None:
    # get decoder inputs from shifting lm labels to the right - this is used in training mode
    decoder_input_ids = self._shift_right(labels)
    # replace possible -100 values in labels by `pad_token_id`
    labels = labels.masked_fill(labels == self.decoder_pad_token_id, -100)

answer_output = self.text_decoder(
    input_ids=decoder_input_ids,
    attention_mask=decoder_attention_mask,
    encoder_hidden_states=question_embeds,
    encoder_attention_mask=attention_mask,
    labels=labels,
    return_dict=return_dict,
    reduction="mean",
)
```
However, in the code for `BlipTextLMHeadModel.forward()`, it seems like it's [already doing that shift for you](https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/models/blip/modeling_blip_text.py#L888):
```python
if labels is not None:
    # we are doing next-token prediction; shift prediction scores and input ids by one
    shifted_prediction_scores = prediction_scores[:, :-1, :].contiguous()
    labels = labels[:, 1:].contiguous().to(shifted_prediction_scores.device)
    loss_fct = CrossEntropyLoss(reduction=reduction, label_smoothing=0.1)
    lm_loss = loss_fct(shifted_prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
    if reduction == "none":
        lm_loss = lm_loss.view(prediction_scores.size(0), -1).sum(1)
```
Am I just misinterpreting this, or is the shift done twice, i.e., the loss is for next-next-token prediction?
EDIT: As another point, the official [Jupyter notebook for BLIP](https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_blip.ipynb) creates an instance of `BlipForConditionalGeneration` and trains it, which also uses `BlipTextLMHeadModel` as the decoder. In this case, the `input_ids` and `labels` are the same (not shifted):
```python
for idx, batch in enumerate(train_dataloader):
    input_ids = batch.pop("input_ids").to(device)
    pixel_values = batch.pop("pixel_values").to(device)

    outputs = model(input_ids=input_ids,
                    pixel_values=pixel_values,
                    labels=input_ids)
```
Inside `BlipForConditionalGeneration.forward()`, it also doesn't shift the tokens:
```python
outputs = self.text_decoder(
    input_ids=input_ids,
    attention_mask=attention_mask,
    encoder_hidden_states=image_embeds,
    labels=labels,
    return_dict=return_dict,
    reduction="mean",
)
```
EDIT 2: Seems like the original BLIP code similarly only shifts once. In `BLIP_VQA.forward()`, located [here](https://github.com/salesforce/BLIP/blob/main/models/blip_vqa.py#L51), there is no shift:
```python
answer = self.tokenizer(answer, padding='longest', return_tensors="pt").to(image.device)
answer.input_ids[:,0] = self.tokenizer.bos_token_id
answer_targets = answer.input_ids.masked_fill(answer.input_ids == self.tokenizer.pad_token_id, -100)

question_output = self.text_encoder(question.input_ids,
                                    attention_mask = question.attention_mask,
                                    encoder_hidden_states = image_embeds,
                                    encoder_attention_mask = image_atts,
                                    return_dict = True)

question_states = []
question_atts = []
for b, n in enumerate(n):
    question_states += [question_output.last_hidden_state[b]]*n
    question_atts += [question.attention_mask[b]]*n
question_states = torch.stack(question_states,0)
question_atts = torch.stack(question_atts,0)

answer_output = self.text_decoder(answer.input_ids,
                                  attention_mask = answer.attention_mask,
                                  encoder_hidden_states = question_states,
                                  encoder_attention_mask = question_atts,
                                  labels = answer_targets,
                                  return_dict = True,
                                  reduction = 'none',
                                  )
```
and there is a shift in `self.text_decoder.forward()`, as seen [here](https://github.com/salesforce/BLIP/blob/main/models/med.py#L904):
```python
prediction_scores = self.cls(sequence_output)

if return_logits:
    return prediction_scores[:, :-1, :].contiguous()

lm_loss = None
if labels is not None:
    # we are doing next-token prediction; shift prediction scores and input ids by one
    shifted_prediction_scores = prediction_scores[:, :-1, :].contiguous()
    labels = labels[:, 1:].contiguous()
    loss_fct = CrossEntropyLoss(reduction=reduction, label_smoothing=0.1)
    lm_loss = loss_fct(shifted_prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
    if reduction=='none':
        lm_loss = lm_loss.view(prediction_scores.size(0),-1).sum(1)
```
Only the `text_decoder` itself shifts the text (again, in the forward function).
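A toy sketch of the combined effect, with hypothetical token ids, shows why this amounts to next-next-token prediction:
```python
import torch

labels = torch.tensor([[101, 7, 8, 9, 102]])  # toy sequence: [CLS] a b c [SEP]

# Outer shift (BlipForQuestionAnswering): decoder_input_ids = shift_right(labels)
decoder_input_ids = torch.cat([torch.zeros_like(labels[:, :1]), labels[:, :-1]], dim=1)

# Inner shift (BlipTextLMHeadModel loss): logits[:, :-1] are compared to labels[:, 1:].
# Net effect: the prediction made from decoder_input_ids[:, t] (which holds labels[:, t-1])
# is scored against labels[:, t+1], i.e. two positions ahead.
print(decoder_input_ids[:, :-1])  # tensor([[  0, 101,   7,   8]])
print(labels[:, 1:])              # tensor([[  7,   8,   9, 102]])
```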
### Expected behavior
N/A
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/23000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/23000/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22999
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22999/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22999/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22999/events
|
https://github.com/huggingface/transformers/issues/22999
| 1,684,342,609 |
I_kwDOCUB6oc5kZQdR
| 22,999 |
Help on Firewalled installation
|
{
"login": "SupreethRao99",
"id": 55043035,
"node_id": "MDQ6VXNlcjU1MDQzMDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/55043035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SupreethRao99",
"html_url": "https://github.com/SupreethRao99",
"followers_url": "https://api.github.com/users/SupreethRao99/followers",
"following_url": "https://api.github.com/users/SupreethRao99/following{/other_user}",
"gists_url": "https://api.github.com/users/SupreethRao99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SupreethRao99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SupreethRao99/subscriptions",
"organizations_url": "https://api.github.com/users/SupreethRao99/orgs",
"repos_url": "https://api.github.com/users/SupreethRao99/repos",
"events_url": "https://api.github.com/users/SupreethRao99/events{/privacy}",
"received_events_url": "https://api.github.com/users/SupreethRao99/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"That is not possible. You can only use models from the Hub or locally downloaded.",
"Thanks @sgugger ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
NONE
| null |
Hello,
I would like to install Hugging Face Transformers in a firewalled environment. I have a Git LFS repository where all my models are stored, and I would like to use them with the `transformers` library's `.from_pretrained()` feature. I referred to this (https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub) to set up the git repo. Could someone help me use the models stored on my git server instead of the Hugging Face Hub?
Thanks
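For anyone hitting the same wall: a minimal sketch of the locally-downloaded route the maintainer points to. The clone URL and path below are hypothetical placeholders, not real endpoints.
```python
# Hypothetical: clone the internal git-lfs model repo to a local directory first, e.g.
#   git clone https://git.internal.example.com/models/my-model.git /models/my-model
from transformers import AutoModel, AutoTokenizer

local_path = "/models/my-model"  # assumed directory with config.json, tokenizer files, and weights
tokenizer = AutoTokenizer.from_pretrained(local_path)
model = AutoModel.from_pretrained(local_path)
```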
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22999/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22998
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22998/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22998/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22998/events
|
https://github.com/huggingface/transformers/pull/22998
| 1,683,898,396 |
PR_kwDOCUB6oc5PJF99
| 22,998 |
Fix typo in mega.mdx
|
{
"login": "dleve123",
"id": 1561546,
"node_id": "MDQ6VXNlcjE1NjE1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1561546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dleve123",
"html_url": "https://github.com/dleve123",
"followers_url": "https://api.github.com/users/dleve123/followers",
"following_url": "https://api.github.com/users/dleve123/following{/other_user}",
"gists_url": "https://api.github.com/users/dleve123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dleve123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dleve123/subscriptions",
"organizations_url": "https://api.github.com/users/dleve123/orgs",
"repos_url": "https://api.github.com/users/dleve123/repos",
"events_url": "https://api.github.com/users/dleve123/events{/privacy}",
"received_events_url": "https://api.github.com/users/dleve123/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes a typo in the Mega documentation.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Documentation: @sgugger, @stevhliu and @MKhalusova
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22998/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22998",
"html_url": "https://github.com/huggingface/transformers/pull/22998",
"diff_url": "https://github.com/huggingface/transformers/pull/22998.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22998.patch",
"merged_at": 1682459926000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22997
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22997/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22997/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22997/events
|
https://github.com/huggingface/transformers/pull/22997
| 1,683,895,371 |
PR_kwDOCUB6oc5PJFUp
| 22,997 |
Add Missing tokenization test [electra]
|
{
"login": "IMvision12",
"id": 88665786,
"node_id": "MDQ6VXNlcjg4NjY1Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/88665786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IMvision12",
"html_url": "https://github.com/IMvision12",
"followers_url": "https://api.github.com/users/IMvision12/followers",
"following_url": "https://api.github.com/users/IMvision12/following{/other_user}",
"gists_url": "https://api.github.com/users/IMvision12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IMvision12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IMvision12/subscriptions",
"organizations_url": "https://api.github.com/users/IMvision12/orgs",
"repos_url": "https://api.github.com/users/IMvision12/repos",
"events_url": "https://api.github.com/users/IMvision12/events{/privacy}",
"received_events_url": "https://api.github.com/users/IMvision12/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot for your PR! @ArthurZucker could you review?",
"@sgugger Any updates, is there something wrong?",
"@sgugger Done!!",
"Thanks for contributing 🔥 \r\n"
] | 1,682 | 1,684 | 1,684 |
CONTRIBUTOR
| null |
# What does this PR do?
Added a tokenization test for ELECTRA.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22997/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22997",
"html_url": "https://github.com/huggingface/transformers/pull/22997",
"diff_url": "https://github.com/huggingface/transformers/pull/22997.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22997.patch",
"merged_at": 1684334715000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22996
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22996/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22996/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22996/events
|
https://github.com/huggingface/transformers/pull/22996
| 1,683,693,633 |
PR_kwDOCUB6oc5PIZHL
| 22,996 |
Make `_test_xla_generate` less flaky
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,682 | 1,682 | 1,682 |
COLLABORATOR
| null |
# What does this PR do?
Make `_test_xla_generate` less flaky by relaxing the condition:
- if the number of examples is fewer than 10: be strict, no difference is allowed
- otherwise, only fail the test if more than 10% of the examples give different outputs between the XLA and non-XLA versions.
Since this test is slow (generation), it is better not to decorate it with `is_flaky`.
For `TFPegasusModelTest::test_xla_generate_slow`: there were more than 10 failures in 70 runs. With this PR, no failures show up.
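A rough sketch of the relaxed comparison (illustrative only; the actual change lives in the TF test utilities):
```python
# Illustrative sketch of the relaxed condition, not the real test code.
def assert_xla_mostly_matches(xla_outputs, eager_outputs, max_diff_fraction=0.1):
    assert len(xla_outputs) == len(eager_outputs)
    num_diff = sum(1 for x, e in zip(xla_outputs, eager_outputs) if x != e)
    if len(xla_outputs) < 10:
        # Few examples: stay strict, no difference allowed.
        assert num_diff == 0
    else:
        # Many examples: tolerate up to 10% divergent outputs.
        assert num_diff / len(xla_outputs) <= max_diff_fraction
```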
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22996/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22996",
"html_url": "https://github.com/huggingface/transformers/pull/22996",
"diff_url": "https://github.com/huggingface/transformers/pull/22996.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22996.patch",
"merged_at": 1682681248000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22995
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22995/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22995/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22995/events
|
https://github.com/huggingface/transformers/pull/22995
| 1,683,632,088 |
PR_kwDOCUB6oc5PILh-
| 22,995 |
Added tokenizer kwargs for fill mask pipeline
|
{
"login": "sajeedmehrab",
"id": 65396882,
"node_id": "MDQ6VXNlcjY1Mzk2ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/65396882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sajeedmehrab",
"html_url": "https://github.com/sajeedmehrab",
"followers_url": "https://api.github.com/users/sajeedmehrab/followers",
"following_url": "https://api.github.com/users/sajeedmehrab/following{/other_user}",
"gists_url": "https://api.github.com/users/sajeedmehrab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sajeedmehrab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sajeedmehrab/subscriptions",
"organizations_url": "https://api.github.com/users/sajeedmehrab/orgs",
"repos_url": "https://api.github.com/users/sajeedmehrab/repos",
"events_url": "https://api.github.com/users/sajeedmehrab/events{/privacy}",
"received_events_url": "https://api.github.com/users/sajeedmehrab/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @Narsil ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22995). All of your documentation changes will be reflected on that endpoint.",
"Can we refactor in order to do :\r\n\r\n```python\r\n output = fill_mask_pipeline(\"Text to predict <mask>\", tokenizer_kwargs=tokenizer_kwargs)\r\n ```\r\n Instead ? Accepting kwargs directly is very hard to maintain down the line because of clashing arguments (for instance `max_length` is one that pops up often enough).\r\n \r\n We can also whiteliste some parameters like `truncation` or `padding` to make them more convenient. but enabling all the kwargs directly is really not something we want I think.\r\n \r\n Thanks for the contribution though, it's a step in the good direction ! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,682 | 1,685 | 1,685 |
NONE
| null |
Added tokenizer kwargs for the fill-mask pipeline, which makes it possible to truncate, pad, or specify a max length, etc., for the tokenizer. Following this edit, the pipeline can be used as follows:
```python
from transformers import pipeline

fill_mask_pipeline = pipeline(
    "fill-mask",
    model=model,
    tokenizer=tokenizer,
    device=0,
)

tokenizer_kwargs = {"truncation": True, "max_length": 2048}
output = fill_mask_pipeline("Text to predict <mask>", **tokenizer_kwargs)
```
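For contrast, this is the call shape suggested in the review comments (a nested `tokenizer_kwargs` argument rather than raw kwargs); a sketch of the proposed interface, not what this PR implements:
```python
tokenizer_kwargs = {"truncation": True, "max_length": 2048}
output = fill_mask_pipeline("Text to predict <mask>", tokenizer_kwargs=tokenizer_kwargs)
```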
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22995/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22995",
"html_url": "https://github.com/huggingface/transformers/pull/22995",
"diff_url": "https://github.com/huggingface/transformers/pull/22995.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22995.patch",
"merged_at": null
}
|